Learning Nonlinear Loop Invariants with Gated Continuous Logic Networks (Extended Version)

March 17, 2020, by Jianan Yao et al., Columbia University

Verifying real-world programs often requires inferring loop invariants with nonlinear constraints. This is especially true in programs that perform many numerical operations, such as control systems for avionics or industrial plants. Recently, data-driven methods for loop invariant inference have shown promise, especially on linear invariants. However, applying data-driven inference to nonlinear loop invariants is challenging due to the large number and magnitude of high-order terms, the potential for overfitting on a small number of samples, and the large space of possible inequality bounds. In this paper, we introduce a new neural architecture for general SMT learning, the Gated Continuous Logic Network (G-CLN), and apply it to nonlinear loop invariant learning. G-CLNs extend the Continuous Logic Network (CLN) architecture with gating units and dropout, which allow the model to robustly learn general invariants over large numbers of terms. To address overfitting that arises from finite program sampling, we introduce fractional sampling—a sound relaxation of loop semantics to continuous functions that facilitates unbounded sampling on the real domain. We additionally design a new CLN activation function, the Piecewise Biased Quadratic Unit (PBQU), for naturally learning tight inequality bounds. We incorporate these methods into a nonlinear loop invariant inference system that can learn general nonlinear loop invariants. We evaluate our system on a benchmark of nonlinear loop invariants and show it solves 26 out of 27 problems, 3 more than prior work, with an average runtime of 53.3 seconds. We further demonstrate the generic learning ability of G-CLNs by solving all 124 problems in the linear Code2Inv benchmark. We also perform a quantitative stability evaluation and show G-CLNs have a convergence rate of 97.5% on quadratic problems, a 39.2% improvement over CLN models.


1. Introduction

Formal verification provides techniques for proving the correctness of programs, thereby eliminating entire classes of critical bugs. While many operations can be verified automatically, verifying programs with loops usually requires inferring a sufficiently strong loop invariant, which is undecidable in general (Hoare, 1969; Blass and Gurevich, 2001; Furia et al., 2014). Invariant inference systems are therefore based on heuristics that work well for loops that appear in practice. Data-driven loop invariant inference is one approach that has shown significant promise, especially for learning linear invariants (Zhu et al., 2018; Si et al., 2018; Ryan et al., 2020). Data-driven inference operates by sampling program state across many executions of a program and trying to identify a Satisfiability Modulo Theories (SMT) formula that is satisfied by all the sampled data points.

However, verifying real-world programs often requires loop invariants with nonlinear constraints. This is especially true in programs that perform many numerical operations, such as control systems for avionics or industrial plants (Damm et al., 2005; Lin et al., 2014). Data-driven nonlinear invariant inference is fundamentally difficult because the space of possible nonlinear invariants is large, but sufficient invariants for verification must be inferred from a finite number of samples. In practice, this leads to three distinct challenges when performing nonlinear data-driven invariant inference: (i) Large search space with high-magnitude terms. Learning nonlinear terms causes the space of possible invariants to grow quickly (i.e., the polynomial expansion of terms grows exponentially in the degree of the terms). Moreover, large higher-order terms dominate the fitting process and prevent meaningful invariants from being learned. (ii) Limited samples. Bounds on the number of loop iterations in programs with integer variables limit the number of possible samples, leading to overfitting when learning nonlinear invariants. (iii) Distinguishing sufficient inequalities. For any given finite set of samples, there are potentially infinite valid inequality bounds on the data. However, verification usually requires specific bounds that constrain the loop behavior as tightly as possible.

// pre: (a >= 0)
n=0; x=0; y=1; z=6;
// compute cube:
while(n != a){
   n += 1;
   x += y;
   y += z;
   z += 6;
}
return x;
// post: x == a^3
(a) Loop for computing cubes, which requires the invariant x = n^3 ∧ y = 3n^2 + 3n + 1 ∧ z = 6n + 6 to infer its postcondition x = a^3. A data-driven model must simultaneously learn a cubic constraint whose value changes by thousands and a linear constraint that increments by 6.
// pre: (n >= 0)
a=0; s=1; t=1;
// compute sqrt:
while (s <= n) {
   a += 1;
   t += 2;
   s += t;
}
return a;
// post: a^2 <= n
//  and: n < (a+1)^2
(b) Loop for computing the integer approximation of a square root. The graph shows three valid inequality invariants, but only the tight quadratic inequality invariant is sufficient to verify that the returned value a satisfies a^2 <= n < (a+1)^2.
Figure 1. Example problems demonstrating the challenges of nonlinear loop invariant learning.

Figures 1(a) and 1(b) illustrate the challenges posed by loops with many higher-order terms as well as nonlinear inequality bounds. The loop in Figure 1(a) computes a cubic power and requires the invariant x = n^3 ∧ y = 3n^2 + 3n + 1 ∧ z = 6n + 6 to verify its postcondition x = a^3. To infer this invariant, a typical data-driven inference system must consider 35 possible terms, ranging from the constant term to third-order monomials, only seven of which are contained in the invariant. Moreover, the higher-order terms in the program will dominate any error measure in fitting an invariant, so any data-driven model will tend to learn only the cubic constraint x = n^3. Figure 1(b) shows a loop for computing the integer square root, where a tight quadratic inequality invariant is required to verify its postcondition. However, a data-driven model must identify this invariant from among potentially infinite other valid but loosely fit inequality invariants.

Most existing methods for nonlinear loop invariant inference address these challenges by limiting either the structure of invariants they can learn or the complexity of invariants they can scale to. Polynomial equation solving methods such as NumInv and Guess-And-Check are able to learn equality constraints but cannot learn nonlinear inequality invariants (Nguyen et al., 2017; Sharma et al., 2013a). In contrast, template enumeration methods such as PIE can potentially learn arbitrary invariants but struggle to scale to loops with nonlinear invariants because the space of possible invariants grows too quickly (Padhi et al., 2016).

In this paper, we introduce an approach that can learn general nonlinear loop invariants. Our approach is based on Continuous Logic Networks (CLNs), a recently proposed neural architecture that can learn SMT formulas directly from program traces (Ryan et al., 2020). CLNs use a parametric relaxation that maps SMT formulas to differentiable functions. This allows CLNs to learn SMT formulas with gradient descent, but a template that defines the logical structure of the formula has to be provided manually.

We base our approach on three developments that address the challenges inherent in nonlinear loop invariant inference: First, we introduce a new neural architecture, the Gated Continuous Logic Network (G-CLN), a more robust CLN architecture that is not dependent on formula templates. Second, we introduce Fractional Sampling, a principled program relaxation for dense sampling. Third, we derive the Piecewise Biased Quadratic Unit (PBQU), a new CLN activation function for inequality learning. We provide an overview of these methods below.

Gated Continuous Logic Networks.

G-CLNs improve the CLN architecture by making it more robust and general. Unlike CLNs, G-CLNs are not dependent on formula templates for logical structure. We adapt three different methods from deep learning to make G-CLN training more stable and combat overfitting: gating, dropout, and batch normalization (Srivastava et al., 2014; Ioffe and Szegedy, 2015; Gers et al., 1999; Bahdanau et al., 2014). To force the model to learn a varied combination of constraints, we apply Term Dropout, which operates similarly to dropout in feedforward neural networks by zeroing out a random subset of terms in each clause. Gating makes the CLN architecture robust by allowing it to ignore subclauses that cannot learn satisfying coefficients for their inputs, due to poor weight initialization or dropout. To stabilize training in the presence of high-magnitude nonlinear terms, we apply normalization to the inputs and weights, similar to batch normalization.

By combining dropout with gating, G-CLNs are able to learn complex constraints for loops with many higher-order terms. For the loop in Figure 1(a), the G-CLN will zero out different subsets of the candidate terms in each subclause during dropout, forcing the model to learn a conjunction of all three equality constraints. Clauses that cannot learn a satisfying set of coefficients due to dropout (e.g., a clause left without the terms needed to express any of the three equalities) will be ignored by a model with gating.

Fractional Sampling. When the samples from the program trace are insufficient to learn the correct invariant due to bounds on program behavior, we perform a principled relaxation of the program semantics to continuous functions. This allows us to perform Fractional Sampling, which generates samples of the loop behavior at intermediate points between integers. To preserve soundness, we define the relaxation such that operations retain their discrete semantics relative to their inputs but operate on the real domain, and any invariant for the continuous relaxation of the program must be an invariant for the discrete program. This allows us to take potentially unbounded samples even in cases where the program constraints prevent sufficient sampling to learn a correct invariant.

Piecewise Biased Quadratic Units. For inequality learning, we design the PBQU activation, which penalizes loose fits and converges to tight constraints on the data. We prove this function will learn a tight bound on at least one point and demonstrate empirically that it learns precise invariant bounds, such as the tight bound shown in Figure 1(b).

We use G-CLNs with Fractional Sampling and PBQUs to develop a unified approach for general nonlinear loop invariant inference. We evaluate our approach on a set of loop programs with nonlinear invariants, and show it can learn invariants for 26 out of 27 problems, 3 more than prior work, with an average runtime of 53.3 seconds. We also perform a quantitative stability evaluation and show G-CLNs have a convergence rate of 97.5% on quadratic problems, a 39.2% improvement over CLN models. We also test the G-CLN architecture on the linear Code2Inv benchmark (Si et al., 2018) and show it can solve all 124 problems.

In summary, this paper makes the following contributions:

  • We develop a new general and robust neural architecture, the Gated Continuous Logic Network (G-CLN), to learn general SMT formulas without relying on formula templates for logical structure.

  • We introduce Fractional Sampling, a method that facilitates sampling on the real domain by applying a principled relaxation of program loop semantics to continuous functions while preserving soundness of the learned invariants.

  • We design PBQUs, a new activation function for learning tight inequality bounds, and provide convergence guarantees for learning a valid bound.

  • We integrate our methods into a general loop invariant inference system and show it solves 26 out of 27 problems in a nonlinear loop invariant benchmark, 3 more than prior work. Our system can also infer loop invariants for all 124 problems in the linear Code2Inv benchmark.

The rest of the paper is organized as follows: In §2, we provide background on the loop invariant inference problem, differentiable logic, and the CLN neural architecture for SMT learning. Subsequently, we introduce the high-level workflow of our method in §3. Next, in §4, we formally define the gated CLN construction, relaxation for fractional sampling, and PBQU for inequality learning, and provide soundness guarantees for gating and convergence guarantees for bounds learning. We then provide a detailed description of our approach for nonlinear invariant learning with CLNs in §5. Finally we show evaluation results in §6 and discuss related work in §7 before concluding in §8.

2. Background

In this section, we provide a brief review of the loop invariant inference problem and then define the differentiable logic operators and the Continuous Logic Network architecture used in our approach.

2.1. Loop Invariant Inference

Loop invariants encapsulate properties of the loop that hold independently of the number of iterations, enabling verification to be performed over loops. For an invariant to be sufficient for verification, it must simultaneously be weak enough to be derived from the precondition and strong enough to conclude the post-condition. Formally, given a loop "while(LC) C", a precondition P, and a post-condition Q, the loop invariant inference problem asks for an inductive invariant I that satisfies the following three conditions:

P ⟹ I        {I ∧ LC} C {I}        (I ∧ ¬LC) ⟹ Q

where the inductive condition is defined using a Hoare triple.

Loop invariants can be encoded in SMT, which facilitates efficient checking of these conditions with solvers such as Z3 (De Moura and Bjørner, 2008; Biere et al., 2009). Our work therefore focuses on inferring likely candidate invariants, since validating a candidate can be done efficiently.

Data-driven Methods. Data-driven loop invariant inference methods use program traces, which record the state of each variable on every iteration of the loop, to guide the invariant generation process. Since an invariant must hold for any valid execution, the collected traces can be used to rule out many potential invariants. Formally, given a set of program traces X, data-driven invariant inference finds an SMT formula F such that ∀x ∈ X, F(x) = True.
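As a concrete illustration (a minimal sketch using the Z3 Python bindings, not the paper's implementation), the three conditions from §2.1 can be checked for the sqrt loop in Figure 1(b) with the candidate invariant discussed in §3:

from z3 import Ints, And, Not, Implies, Solver, unsat, substitute

a, s, t, n = Ints('a s t n')

# Candidate invariant for the sqrt loop in Figure 1(b).
I = And(t == 2 * a + 1, s == (a + 1) * (a + 1), a * a <= n)

pre  = And(n >= 0, a == 0, s == 1, t == 1)
cond = s <= n
post = And(a * a <= n, n < (a + 1) * (a + 1))

# Loop body: a += 1; t += 2; s += t -- expressed as a simultaneous substitution on I.
I_next = substitute(I, (a, a + 1), (t, t + 2), (s, s + (t + 2)))

def valid(formula):
    # A formula is valid iff its negation is unsatisfiable.
    solver = Solver()
    solver.add(Not(formula))
    return solver.check() == unsat

print(valid(Implies(pre, I)))                   # precondition establishes I
print(valid(Implies(And(I, cond), I_next)))     # I is inductive
print(valid(Implies(And(I, Not(cond)), post)))  # I implies the postcondition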

2.2. Basic Fuzzy Logic

Our approach to SMT formula learning is based on a form of differentiable logic called Basic Fuzzy Logic (BL). BL is a relaxation of first-order logic that operates on continuous truth values on the interval [0, 1] instead of on boolean values. BL uses a class of functions called t-norms (⊗), which preserve the semantics of boolean conjunction on continuous truth values. T-norms are required to be consistent with boolean logic, monotonic on their domain, commutative, and associative (Hájek, 2013). Formally, a t-norm is defined such that:

  • ⊗ is consistent for any x ∈ [0, 1]: x ⊗ 1 = x and x ⊗ 0 = 0.

  • ⊗ is commutative and associative for any x, y, z ∈ [0, 1]: x ⊗ y = y ⊗ x and x ⊗ (y ⊗ z) = (x ⊗ y) ⊗ z.

  • ⊗ is monotonic (nondecreasing) for any x, y1, y2 ∈ [0, 1]: y1 ≤ y2 ⟹ x ⊗ y1 ≤ x ⊗ y2.

BL additionally requires that t-norms be continuous. T-conorms (⊕), derived from t-norms via De Morgan's law (x ⊕ y = 1 − ((1 − x) ⊗ (1 − y))), operate as disjunctions on continuous truth values, while negation is defined as ¬x = 1 − x.

In this paper, we keep the t-norm abstract in our formulations to make the framework general. Prior work (Ryan et al., 2020) found that the product t-norm (x ⊗ y = x · y) performs better in Continuous Logic Networks, so we use the product t-norm in our final implementation, although other t-norms (e.g., Gödel) can also be used.

2.3. Continuous Logic Networks

We perform SMT formula learning with Continuous Logic Networks (CLNs), a neural architecture introduced in (Ryan et al., 2020) that is able to learn SMT formulas directly from data. CLNs can be used to learn loop invariants from the observed behavior of a program.

CLNs are based on a parametric relaxation of SMT formulas that maps the SMT formulation from boolean first-order logic to BL. The model defines a mapping S that, given a quantifier-free SMT formula F, produces a continuous function S(F) from assignments of the formula's variables to truth values in [0, 1]. In order for the continuous model to be usable in gradient-guided optimization while also preserving the semantics of boolean logic, it must fulfill three conditions:

  1. It must preserve the meaning of the logic, such that the continuous truth value of any satisfying assignment is always greater than that of any unsatisfying assignment: if F(x) = True and F(x′) = False, then S(F)(x) > S(F)(x′).

  2. It must be continuous and smooth (i.e., differentiable almost everywhere) to facilitate training.

  3. It must be strictly increasing as an unsatisfying assignment of terms approaches satisfying the mapped formula, and strictly decreasing as a satisfying assignment of terms approaches violating the formula.

S is constructed as follows to satisfy these requirements. The logical relations are mapped to their continuous equivalents in BL:

Conjunction: S(F1 ∧ F2) := S(F1) ⊗ S(F2)
Disjunction: S(F1 ∨ F2) := S(F1) ⊕ S(F2)
Negation: S(¬F) := 1 − S(F)

where each Fi is an SMT formula. S defines SMT predicates with functions that map to continuous truth values. This mapping is defined for ≥ and > using sigmoids with a shift parameter ε and smoothing parameter B:

Greater Than: S(t > u) := σ(B(t − u − ε))
Greater or Equal to: S(t ≥ u) := σ(B(t − u + ε))

where σ(x) = 1/(1 + e^(−x)). Mappings for other predicates are derived from their logical relations to ≥ and >:

Less Than: S(t < u) := 1 − S(t ≥ u)
Less or Equal to: S(t ≤ u) := 1 − S(t > u)
Equality: S(t = u) := S(t ≥ u) ⊗ S(t ≤ u)
Inequality: S(t ≠ u) := 1 − S(t = u)

Using these definitions, the parametric relaxation S satisfies all three conditions for sufficiently large B and sufficiently small ε. Based on this parametric relaxation, we build a Continuous Logic Network model M, which is a computational graph of S(F) with learnable parameters W. When training a CLN, loss terms are applied to penalize overly smooth settings of the relaxation parameters, ensuring that as the loss approaches 0 the CLN learns a precise formula. Under these conditions, the following relationship holds between a trained CLN model M with coefficients W and its associated formula F for a given set of data points X: ∀x ∈ X, M(x; W) = 1 ⟹ F(x; W) = True.
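A minimal PyTorch sketch of this relaxation, using the product t-norm and the sigmoid constructions above (the hyperparameter values are illustrative):

import torch

B, EPS = 10.0, 0.5   # smoothing and shift parameters

def ge(x, y):        # S(x >= y) = sigmoid(B * (x - y + eps))
    return torch.sigmoid(B * (x - y + EPS))

def gt(x, y):        # S(x > y) = sigmoid(B * (x - y - eps))
    return torch.sigmoid(B * (x - y - EPS))

def t_norm(p, q):    # product t-norm: conjunction
    return p * q

def neg(p):          # negation
    return 1.0 - p

def eq(x, y):        # S(x = y) = S(x >= y) combined with S(x <= y)
    return t_norm(ge(x, y), neg(gt(x, y)))

# Continuous truth value of (t = w0*a + w1) on one trace point, differentiable in w.
a, t = torch.tensor(2.0), torch.tensor(5.0)
w = torch.tensor([2.0, 1.0], requires_grad=True)
val = eq(t, w[0] * a + w[1])
val.backward()
print(val.item(), w.grad)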

Figure 2 shows an example CLN for a formula over a single variable.

Figure 2. Plot of an example formula and its associated CLN relaxation.

3. Workflow

Figure 3. Overview of our method, consisting of three phases: trace generation from the source code, G-CLN training, and invariant extraction followed by checking with Z3.
// pre: (n >= 0)
a=0; s=1; t=1;
while (s<=n){
  log(a,s,t,n);
  a += 1;
  t += 2;
  s += t;
}
log(a,s,t,n);
(a) Program instrumented to log samples.
1   a   t   a*s   t^2   s*t
1 0 1 0 1 1
1 1 3 4 9 12
1 2 5 18 25 45
1 3 7 48 49 112
(b) Sample data points generated with maximum degree of 2.
Figure 4. Training data generation for the program shown in Figure 0(b).

Figure 3 illustrates our overall workflow for loop invariant inference. Our approach has three stages: (i) We first instrument the program and execute to generate trace data. (ii) We then construct and train a G-CLN model to fit the trace data. (iii) We extract a candidate loop invariant from the model and check it against a specification.

Given a program loop, we modify it to record the variables on each iteration and then execute the program to generate samples. Figure 4(a) illustrates this process for the sqrt program from Figure 1(b). The program has input n with precondition n >= 0, so we execute it with input values in a set range. We then expand the samples to all candidate terms for the loop invariant. By default, we enumerate all the monomials over program variables up to a given degree d, as shown in Figure 4(b). Our system can also be configured to consider other nonlinear terms as needed.
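A minimal sketch of this trace-generation step for the sqrt program (the input range and helper names are illustrative, not the paper's implementation):

import itertools

def sqrt_traces(n_values, max_degree=2):
    # Run the sqrt loop from Figure 4(a) and expand every logged state into
    # candidate monomial terms up to max_degree (plus the constant term 1).
    var_names = ['a', 's', 't', 'n']
    rows = []
    for n in n_values:
        a, s, t = 0, 1, 1
        states = [(a, s, t, n)]
        while s <= n:                 # loop from Figure 4(a)
            a += 1
            t += 2
            s += t
            states.append((a, s, t, n))
        for state in states:
            vals = dict(zip(var_names, state))
            row = {'1': 1}
            for deg in range(1, max_degree + 1):
                for combo in itertools.combinations_with_replacement(var_names, deg):
                    prod = 1
                    for v in combo:
                        prod *= vals[v]
                    row['*'.join(combo)] = prod
            rows.append(row)
    return rows

print(sqrt_traces(range(8))[-1])      # terms for the last logged state with n = 7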

We then construct and train a G-CLN model using the collected trace data. We use the model architecture described in §5.2.1, with PBQUs for bounds learning following the procedure in §5.2.2. After training the model, the SMT formula for the invariant is extracted by recursively descending through the model and extracting clauses whose gating parameters are above 0.5, as outlined in Algorithm 1. On the sqrt program, the model learns the invariant (t = 2a + 1) ∧ (s = (a + 1)^2) ∧ (a^2 <= n).

Finally, if Z3 returns a counterexample, we incorporate it into the training data and rerun the three stages with more generated samples. Our system repeats this process until a valid invariant is learned or a timeout is reached.

4. Theory

In this section, we first present our gating construction for CLNs and prove gated CLNs are sound with regard to their underlying discrete logic. We then describe Piecewise Biased Quadratic Units, a specific activation function construction for learning tight bounds on inequalities, and provide theoretical guarantees. Finally we present a technique to relax loop semantics and generate more samples when needed.

4.1. Gated t-norms and t-conorms

In the original CLNs (Ryan et al., 2020), a formula template is required to learn the invariant. For example, to learn an invariant that is a conjunction of two equalities, we have to provide a template of that shape, which is then constructed as a CLN model to learn the coefficients. We therefore have to know in advance whether the correct invariant is an atomic clause, a conjunction, a disjunction, or a more complex logical formula. To tackle this problem, we introduce gated t-norms and gated t-conorms.

Given a classic t-norm T(x, y), we define its associated gated t-norm as

T_gated(x, y; g1, g2) := T(1 − g1(1 − x), 1 − g2(1 − y)).

Here g1, g2 ∈ [0, 1] are gate parameters indicating whether x and y are activated, respectively. The following case analysis shows the intuition behind gated t-norms: T_gated equals T(x, y) when g1 = g2 = 1, equals x when g1 = 1 and g2 = 0, equals y when g1 = 0 and g2 = 1, and equals 1 (True) when g1 = g2 = 0.

Informally speaking, when g1 = 1, the input x is activated and behaves as in the classic t-norm. When g1 = 0, x is deactivated and discarded. When 0 < g1 < 1, the value of g1 indicates how much information we should take from x. The same pattern applies to g2 and y.

We can prove that, for any g1, g2 ∈ [0, 1], the gated t-norm is continuous and monotonically nondecreasing with regard to x and y, and is thus well suited for training.

Like the original t-norm, the gated t-norm can easily be extended to more than two operands. In the case of three operands, we have:

T_gated(x, y, z; g1, g2, g3) := T(1 − g1(1 − x), 1 − g2(1 − y), 1 − g3(1 − z)).

Using De Morgan's law x ⊕ y = 1 − ((1 − x) ⊗ (1 − y)), we define the gated t-conorm as

T'_gated(x, y; g1, g2) := 1 − T(1 − g1·x, 1 − g2·y).

Similar to gated t-norms, gated t-conorms reduce to the classic t-conorm when g1 = g2 = 1, to x or y when only one gate is active, and to 0 (False) when g1 = g2 = 0.

We now replace the original t-norms and t-conorms in the CLN with our gated alternatives, which we diagram in Figure 5. Figure 6 demonstrates a gated CLN representing an SMT formula. With the gated architecture, the gating parameters of each gated t-norm or gated t-conorm are learnable during model training, so that the model can decide from the training data which inputs should be adopted and which should be discarded. This improves model flexibility and does not require a specified template.
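A minimal PyTorch sketch of gated conjunction and disjunction, using the product t-norm and the gating construction given above:

import torch

def gated_t_norm(x, y, g1, g2):
    # A deactivated input (g = 0) is replaced by 1, the identity element of the
    # t-norm, so the corresponding input is effectively ignored.
    return (1 - g1 * (1 - x)) * (1 - g2 * (1 - y))

def gated_t_conorm(x, y, g1, g2):
    # Obtained from the gated t-norm via De Morgan's law; a deactivated input is
    # replaced by 0, the identity element of disjunction.
    return 1 - (1 - g1 * x) * (1 - g2 * y)

x, y = torch.tensor(0.9), torch.tensor(0.2)
g1 = torch.tensor(1.0, requires_grad=True)
g2 = torch.tensor(0.0, requires_grad=True)
print(gated_t_norm(x, y, g1, g2))    # ~0.9: y is gated out, the result follows x
print(gated_t_conorm(x, y, g1, g2))  # ~0.9: same, for the disjunctive case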

Figure 5. Example of gated t-norm with three operands constructed from binary t-norms. The gated t-conorm is done similarly.
Figure 6. An instance of a gated CLN. "+" means activated (g = 1) and "-" means deactivated (g = 0); the learned SMT formula is read off from the activated subclauses.

We now formally state the procedure to recursively retrieve the SMT formula from a gated CLN model in Algorithm 1. Abusing notation for brevity, the model symbol in the conditions of Algorithm 1 refers to the output node of the (sub)model rather than the model itself. ExtractAtomicFormula in the final case is a subroutine that extracts the formula for a model with no logical connectives (e.g., retrieving an atomic equality in Figure 6): the learned linear weights serve as the coefficients of the terms in the equality or inequality, depending on the associated activation function. Finally, we need to round the learned coefficients to integers. We first scale the coefficients so that the largest is 1 and then round each to the nearest rational number with a bounded denominator. We check whether each rounded invariant fits all the training data and discard the invalid ones.

Input: A gated CLN model M, with input nodes x1, …, xk and output node p.

Output: An SMT formula F

Procedure ExtractFormula(M)

1:  if M = T_gated(M1, …, Mk; g1, …, gk) then
2:     F := True
3:     for i := 1 to k do
4:        if gi = 1 then
5:           F := F ∧ ExtractFormula(Mi)
6:  else if M = T'_gated(M1, …, Mk; g1, …, gk) then
7:     F := False
8:     for i := 1 to k do
9:        if gi = 1 then
10:          F := F ∨ ExtractFormula(Mi)
11:  else if M = ¬M1 (i.e., 1 − M1) then
12:     F := ¬ExtractFormula(M1)
13:  else
14:     F := ExtractAtomicFormula(M)
15:  return F
Algorithm 1 Formula Extraction Algorithm.
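The coefficient rounding step described before Algorithm 1 can be sketched as follows (the maximum denominator of 10 is illustrative; the values actually tried are configured in §6):

from fractions import Fraction

def round_coefficients(weights, max_denominator=10):
    # Scale so the largest-magnitude coefficient is 1, then snap each coefficient
    # to the nearest rational with a bounded denominator.
    scale = max(abs(w) for w in weights)
    return [Fraction(w / scale).limit_denominator(max_denominator) for w in weights]

# e.g. raw weights learned for an equality such as t - 2a - 1 = 0
print(round_coefficients([0.499, -0.998, -0.494]))   # -> [1/2, -1, -1/2]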

In Theorem 4.1, we will show that the extracted SMT formula is equivalent to the gated CLN model under some constraints. We first introduce a property of t-norms that is defined in the original CLNs (Ryan et al., 2020).

Property 1. For any x, y ∈ [0, 1], x ⊗ y = 1 if and only if x = 1 and y = 1.

The product t-norm x ⊗ y = x · y, which is used in our implementation, has this property.

Note that the hyperparameters in Theorem 4.1 will be formally introduced in §4.2 and are unimportant here; one can simply treat the parameterized value as the model output M(x).

Theorem 4.1.

For a gated CLN model M with input nodes x1, …, xk and output node p, if all gating parameters are either 0 or 1, then the SMT formula F recovered by the formula extraction algorithm is equivalent to the gated CLN model M. That is, for every input x,

(1) F(x) = True ⟺ M(x) = 1
(2) F(x) = False ⟺ M(x) = 0

as long as the t-norm in M satisfies Property 1.

Proof.

We prove this by induction over the formula structure, considering four cases: atomic, negation, t-norm, and t-conorm. For brevity, we sketch the t-norm case here and provide the full proof in Appendix A.

T-norm Case. If the final operation in M is a gated t-norm, then for each submodel Mi the gating parameters are all either 0 or 1. By the induction hypothesis, for each Mi we can extract an equivalent SMT formula Fi satisfying Eq. (1)(2) using Algorithm 1. We can then show that the full model M and the extracted formula F also satisfy Eq. (1)(2), using the induction hypothesis, the properties of gated t-norms, and Property 1. ∎

The requirement that all gating parameters be either 0 or 1 means that no gate is partially activated (e.g., g = 0.5). Gating parameters between 0 and 1 are acceptable during model fitting but should be eliminated when the model converges. In practice this is achieved by the gate regularization discussed in §5.2.1.

Theorem 4.1 guarantees the soundness of the gating methodology with regard to discrete logic. Since the CLN architecture is composed of operations that are sound with regard to discrete logic, this property is preserved when gated t-norms and t-conorms are used in the network.

Now the learnable parameters of our gated CLN include both linear weights, as in typical neural networks, and the gating parameters g, so the model can represent a large family of SMT formulas. Given a training set X, when the gated CLN model M is trained such that M(x) = 1 for all x ∈ X, then by Theorem 4.1 the recovered formula F is guaranteed to hold for all the training samples; that is, ∀x ∈ X, F(x) = True.

4.2. Parametric Relaxation for Inequalities

For learned inequality constraints to be useful in verification, they usually need to constrain the loop behavior as tightly as possible. In this section, we define a CLN activation function for bounds learning, the Piecewise Biased Quadratic Unit (PBQU), which naturally learns tight bounds during training while maintaining the soundness guarantees of the CLN mapping to SMT. Our parametric relaxation for the predicate t ≥ u is defined as:

(3)

Here the relaxation is governed by two constants that control the curvature on either side of the boundary. The following limit property holds: as one constant approaches 0 and the other approaches infinity, S(t ≥ u) approaches the original semantics of the predicate t ≥ u. Figure 7(b) provides an illustration of our parametric relaxation for ≥.

Compared with the sigmoid construction in the original CLNs (Figure 7(a)), our parametric relaxation penalizes very large values of t − u, for which t ≥ u is certainly satisfied but not very informative because the bound is too weak. In general, our piecewise mapping penalizes data points that lie farther from the boundary, thus encouraging the model to learn a tight bound on the samples. In contrast, the sigmoid construction encourages samples to be far from the boundary, resulting in loose bounds that are not useful for verification.

(a) Plot of S(t ≥ u) with the original CLNs' sigmoid construction.
(b) Plot of S(t ≥ u) with our piecewise construction.
Figure 7. Comparison of the two parametric relaxations of ≥.

Since the samples represent only a subset of the reachable states of the program, encouraging a tighter bound may potentially lead to overfitting. However, we ensure soundness by later checking learned invariants with a solver. If an initial bound is too tight, we incorporate counterexamples into the training data. Our empirical results show this approach works well in practice.
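As a rough illustration of the qualitative behavior described above (the functional form below is an assumption for illustration, not Eq. (3) itself):

import torch

def pbqu_like(x, c1=0.1, c2=10.0):
    # Piecewise quadratic relaxation of (x >= 0): satisfied points are penalized
    # gently as they move away from the boundary (a loose fit), while violating
    # points are penalized sharply. As c1 -> 0 and c2 -> infinity it approaches
    # the discrete semantics of >=.
    return torch.where(x >= 0,
                       1.0 / (1.0 + c1 * x * x),
                       1.0 / (1.0 + c2 * x * x))

xs = torch.tensor([-1.0, 0.0, 0.5, 5.0])
print(pbqu_like(xs))   # points tight against the boundary score close to 1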

Given a set of n-dimensional samples X = {x1, …, xm}, where x_{i,j} denotes the value of the j-th term in the i-th sample, we want to learn an inequality w1·z1 + … + wn·zn ≥ b over these samples. The desirable properties of such an inequality are that it is valid for all points and fits them as tightly as possible. Formally, we define a "desired" inequality as one satisfying:

(4)   (∀i. w1·x_{i,1} + … + wn·x_{i,n} ≥ b)  ∧  (∃i. w1·x_{i,1} + … + wn·x_{i,n} = b)

Our parametric relaxation for ≥ shown in Eq. (3) can always learn an inequality that is very close to a "desired" one with proper choices of its constants. Theorem 4.2 puts this formally.

Theorem 4.2 ().

Given a set of -dimensional samples with the maximum L2-norm , if and , and the weights are constrained as , then when the model converges, the learned inequality has distance at most from a “desired” inequality.

Proof.

See Appendix B. ∎

Recall that the slack in Theorem 4.2 is controlled by a small constant, so it can be considered the error bound of inequality learning. Although we only prove the theoretical guarantee when learning a single inequality, our parametric relaxation for inequalities can be combined with other inequalities and equalities through conjunctions and disjunctions within a single CLN model.

Based on our parametric relaxation for ≥, the other inequality predicates can be defined accordingly:

S(t ≤ u) := S(u ≥ t),   S(t > u) := S(t ≥ u + c),   S(t < u) := S(u ≥ t + c),

where c is a set small constant.

For our parametric relaxation, some axioms of classical logic hold only approximately rather than strictly; they hold strictly in the limits of the constants discussed above.

We reuse the Gaussian function as the parametric relaxation for equalities (Ryan et al., 2020): given a small constant σ, S(t = u) := exp(−(t − u)^2 / (2σ^2)).

4.3. Fractional Sampling

In some cases, the samples generated from the original program are insufficient to learn the correct invariant, due to the dominating growth of some terms (higher-order terms in particular) or a limited number of loop iterations. To generate more fine-grained yet valid samples, we perform Fractional Sampling, which relaxes the program semantics to continuous functions without violating the loop invariants by varying the initial values of the program variables. The intuition is as follows.

Any numerical loop invariant I(x) can be viewed as a predicate over the program variables x, which are initialized to values x = c, such that

(5)   ∀x. (c ⇝ x) ⟹ I(x)

where c ⇝ x means that starting from initial values c and executing the loop for 0 or more iterations ends with values x for the variables.

Now we relax the initial values and view them as input variables x0, which may carry arbitrary values. The new loop program has variables (x, x0). Suppose we can learn an invariant predicate I′(x, x0) for this new program, i.e.,

(6)   ∀x0. ∀x. (x0 ⇝ x) ⟹ I′(x, x0)

Then, letting x0 = c, Eq. (6) becomes

(7)   ∀x. (c ⇝ x) ⟹ I′(x, c)

Now c is a constant, so I′(x, c) is a predicate over x alone that satisfies Eq. (5), and is thus a valid invariant for the original program. In fact, if we learn the predicate I′ successfully, then we have a more general loop invariant that applies for any given initial values.

Figure 8 shows how Fractional Sampling can generate more fine-grained samples with different initial values, making model fitting much easier in our data-driven learning system. The correct loop invariant for the program in Figure 8(a) includes the equality 4x = y^4 + 2y^3 + y^2.

To learn this equality with normal sampling, six candidate terms (1, x, y, y^2, y^3, y^4) remain after the heuristic filters in §5.1.3. Figure 8(b) shows a subset of the training samples without Fractional Sampling (the column for the constant term 1 is omitted).

// pre: x = y = 0
//      /\ k >= 0
while (y < k) {
  y++;
  x += y * y * y;
}
// post: 4x == k^2
//       * (k + 1)^2
(a) The ps4 program in the benchmark.
x   y   y^2   y^3   y^4
0 0 0 0 0
1 1 1 1 1
9 2 4 8 16
36 3 9 27 81
100 4 16 64 256
225 5 25 125 625
(b) Training data generated without Fractional Sampling.
x   y   y^2   y^3   y^4   x0   y0   y0^2   y0^3   y0^4
-1 -0.6 0.36 -0.22 0.13 -1 -0.6 0.36 -0.22 0.13
-0.9 0.4 0.16 0.06 0.03 -1 -0.6 0.36 -0.22 0.13
1.8 1.4 1.96 2.74 3.84 -1 -0.6 0.36 -0.22 0.13
0 -1.2 1.44 -1.73 2.07 0 -1.2 1.44 -1.73 2.07
0 -0.2 0.04 -0.01 0.00 0 -1.2 1.44 -1.73 2.07
0.5 0.8 0.64 0.52 0.41 0 -1.2 1.44 -1.73 2.07
(c) Training data generated with fractional sampling.
Figure 8. An example of Fractional Sampling.

When y becomes larger, the low-order terms y^2 and y^3 become increasingly negligible because they are significantly smaller than the dominant terms x and y^4. In practice we observe that the coefficients for x and y^4 can be learned accurately, but not those for the low-order terms. To tackle this issue, we want to generate more samples where all terms are on the same level, i.e., where |y| is close to 1. Such samples can be easily generated by feeding in more initial values around zero using Fractional Sampling. Figure 8(c) shows some generated samples from the initial values (x0, y0) = (−1, −0.6) and (0, −1.2).

Now we have more samples where the terms are on the same level, making it easier for the model to converge to the accurate solution. Our gated CLN model can correctly learn the relaxed invariant 4(x − x0) = (y^4 + 2y^3 + y^2) − (y0^4 + 2y0^3 + y0^2). Finally, setting the initial values back to x0 = y0 = 0 recovers the correct invariant for the original program, 4x = y^4 + 2y^3 + y^2.

Note that for convenience, in Eq. (5)(6)(7) we assume all variables are initialized in the original program and all of them are relaxed in the new program. However, the framework easily extends to programs with uninitialized variables, or to relaxing only a subset of the initialized variables. Details on how Fractional Sampling is incorporated in our system are provided in §5.4.
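A sketch of how fractional samples like those in Figure 8(c) can be generated for the ps4 loop (the initial-value ranges and the bound k below are illustrative):

import random

def ps4_trace(x0, y0, k):
    # Run the (relaxed) ps4 loop from Figure 8(a) starting from fractional initial
    # values (x0, y0), logging monomials of x, y and of the initial values.
    rows = []
    x, y = x0, y0
    while True:
        rows.append([x, y, y**2, y**3, y**4, x0, y0, y0**2, y0**3, y0**4])
        if not (y < k):
            break
        y += 1
        x += y ** 3
    return rows

# Normal sampling: integer initial values only.
integer_rows = ps4_trace(0, 0, k=5)

# Fractional Sampling: perturb the initial values on the real domain.
fractional_rows = []
for _ in range(3):
    x0 = round(random.uniform(-1, 1), 1)
    y0 = round(random.uniform(-1.5, 1.5), 1)
    fractional_rows += ps4_trace(x0, y0, k=1)
print(fractional_rows[:3])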

5. Nonlinear Invariant Learning

In this section, we describe our overall approach for nonlinear loop invariant inference. We first describe our methods for stable CLN training on nonlinear data. We then give an overview of our model architecture and how we incorporate our inequality activation function to learn inequalities. Finally, we show how we extend our approach to also learn invariants that contain external functions.

5.1. Stable CLN Training on Nonlinear Data

Nonlinear data causes instability in CLN training due to the large number of terms and widely varying magnitudes it introduces. We address this by modifying the CLN architecture to normalize both inputs and weights on a forward execution. We then describe how we implement term dropout, which helps the model learn precise SMT coefficients.

5.1.1. Data Normalization

Exceedingly large inputs cause instability and prevent the CLN model from converging to precise SMT formulas that fit the data. We therefore modify the CLN architecture so that it rescales each input sample to a set L2-norm; in our implementation, we use 10.

1   a   t   a*s   t^2   s*t
0.70 0 0.70 0 0.70 0.70
0.27 0.27 0.81 1.08 2.42 3.23
0.13 0.25 0.63 2.29 3.17 5.71
0.06 0.19 0.45 3.10 3.16 7.23
Table 1. Training data after normalization for the program in Figure 0(b), which computes the integer square root.

We take the program in Figure 1(b) as an example. The raw samples before normalization are shown in Figure 4(b). The monomial terms span too wide a range, which poses difficulty for network training. With data normalization, each sample (i.e., each row) is proportionally rescaled to L2-norm 10. The normalized samples are shown in Table 1.

Now the samples occupy a more regular range. Note that data normalization does not affect model correctness: if an original sample satisfies an equality over the candidate terms (note that a term can be a higher-order monomial), so does the normalized sample, and vice versa. The same argument applies to inequalities.
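A minimal sketch of the per-row rescaling, applied to the raw samples from Figure 4(b):

import torch

def normalize_rows(samples, target_norm=10.0):
    # Rescale each sample (a row of monomial terms) to a fixed L2 norm; a valid
    # equality or inequality over the terms is preserved under positive scaling.
    norms = samples.norm(dim=1, keepdim=True).clamp_min(1e-12)
    return samples * (target_norm / norms)

raw = torch.tensor([[1., 0., 1.,  0.,  1.,   1.],
                    [1., 1., 3.,  4.,  9.,  12.],
                    [1., 2., 5., 18., 25.,  45.],
                    [1., 3., 7., 48., 49., 112.]])
print(normalize_rows(raw))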

5.1.2. Weight Regularization

For both an equality invariant w1·z1 + … + wn·zn = 0 and an inequality invariant w1·z1 + … + wn·zn ≥ 0, the all-zero assignment w1 = … = wn = 0 is a trivial solution. To avoid learning this trivial solution, we require at least one wi to be non-zero. A more elegant way is to constrain the norm of the weight vector to the constant 1. In practice we choose the L2-norm, as in Theorem 4.2, so the weights are constrained to satisfy ||w||2 = 1.
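A sketch of one way to enforce this constraint during training (renormalizing the weight vector on every forward pass; other projection schemes are possible):

import torch

class UnitNormLinear(torch.nn.Module):
    # Linear form w^T z whose weight vector is rescaled to unit L2 norm on every
    # forward pass, ruling out the trivial all-zero solution.
    def __init__(self, num_terms):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(num_terms))

    def forward(self, z):
        w = self.weight / self.weight.norm().clamp_min(1e-12)
        return z @ w

layer = UnitNormLinear(6)
print(layer(torch.randn(4, 6)))   # one linear-form value per sample row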

5.1.3. Term Dropout

Given a program with three variables x, y, and z, there are ten candidate monomial terms of degree at most two: 1, x, y, z, x^2, y^2, z^2, xy, yz, and xz. The large number of terms poses difficulty for invariant learning, and the loop invariant in a real-world program is unlikely to contain all of them. We use two methods to select terms. First, the growth-rate-based heuristic in (Sharma et al., 2013a) is adopted to filter out unnecessary terms. Second, we apply a random dropout to discard terms before training.

Dropout is a common practice in neural networks to avoid overfitting and improve performance. Our dropout is randomly predetermined before training, which is different from the typical weight dropout in deep learning (Srivastava et al., 2014). Suppose that after the growth-rate-based filter, seven terms remain. Before training, each input term to a neuron may be discarded with a fixed probability.

The importance of dropout is twofold. First, it further reduces the number of terms in each neuron. Second, it encourages the G-CLN to learn simpler invariants. For example, if the desired invariant is a conjunction of two equalities, a neuron that retains all terms may learn a linear combination of the two, which is correct but not human-friendly. If a term appearing only in the first equality is dropped in one neuron, that neuron may learn the second equality instead; similarly, a neuron that drops terms unique to the second equality may learn the first. Together, the network consisting of both neurons will learn the precise invariant.

Since the term dropout is random, a neuron may end up having no valid invariant to learn (e.g., when necessary terms of both equalities are dropped in the example above). But when gating (§4.1) is adopted, such a neuron will be deactivated, and the remaining neurons may still be able to learn the desired invariant. More details on gated model training are provided in §5.2.1.
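A sketch of predetermined term dropout: each neuron receives a fixed binary mask over the candidate terms, drawn once before training (the dropout probability here is illustrative):

import torch

def make_dropout_masks(num_neurons, num_terms, p_drop=0.3, seed=0):
    # Unlike standard dropout, the mask is fixed for the whole training run, so a
    # dropped term can never re-enter that neuron's constraint.
    gen = torch.Generator().manual_seed(seed)
    return (torch.rand(num_neurons, num_terms, generator=gen) >= p_drop).float()

terms = ['1', 'a', 't', 'a*s', 't^2', 's*t']
masks = make_dropout_masks(num_neurons=4, num_terms=len(terms))
# During training each neuron's weights are multiplied elementwise by its mask.
print(masks)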

5.2. Gated CLN Invariant Learning

Here we describe the Gated CLN architecture and how we incorporate the bounds learning activation function to learn general nonlinear loop invariants.

5.2.1. Gated CLN Architecture

Figure 9. Diagram of G-CLN model. Additional disjunction and conjunction layers may be added to learn more complex SMT formulas.

Architecture. In §4.1, we introduced the gated t-norm and gated t-conorm and illustrated how they can be integrated into the CLN architecture. In theory the gates can cascade over many layers; in practice, we use a gated t-conorm layer representing logical OR followed by a gated t-norm layer representing logical AND, as shown in Figure 9. The SMT formula extracted from such a gated CLN architecture is in conjunctive normal form (CNF). In other words, the G-CLN is parameterized by m and n, where the underlying formula can be a conjunction of up to m clauses, each of which is a disjunction of up to n atomic clauses. In our experiments we set m = 10 and n = 2.

Gated CLN Training. For the sake of discussion, consider a gated t-norm with two inputs. We note that the gating parameters g1 and g2 have an intrinsic tendency to approach 0 in our construction: when g1 = g2 = 0, the gated t-norm outputs 1 (True) regardless of the truth values of the inputs x and y. So when training the gated CLN model, we apply regularization on the t-norm gating parameters to penalize small values. Similarly, the gating parameters of a gated t-conorm have an intrinsic tendency to approach 1, because the t-conorm output is at least as large as each of its activated inputs. To resolve this, we apply regularization pressure on the t-conorm gating parameters to penalize values close to 1.

In the general case, given a training set X and regularization coefficients for the two kinds of gates, the model learns to minimize a loss function, with respect to the linear weights and the gating parameters, that combines (i) a fitting term pushing the model's continuous truth value toward 1 on every training sample, (ii) a penalty on t-norm gating parameters that drift toward 0, and (iii) a penalty on t-conorm gating parameters that drift toward 1.

By training the G-CLN with this loss formulation, the model tends to learn a formula satisfying each training sample (recall the relationship in §4.1). Together, gating and regularization prune off poorly learned clauses, while preventing the network from pruning too aggressively. When training converges, all the gating parameters are very close to either 1 or 0, indicating whether each clause participates in the formula. The invariant is then recovered using Algorithm 1.
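A sketch of this objective (the fitting and regularization terms below are illustrative forms consistent with the description above; the regularization coefficients are hyperparameters):

import torch

def gcln_loss(model_output, t_norm_gates, t_conorm_gates,
              lambda_and=0.1, lambda_or=0.1):
    fit_loss = (1.0 - model_output).mean()                 # samples should satisfy the formula
    and_reg = lambda_and * (1.0 - t_norm_gates).mean()     # discourage conjunction gates -> 0
    or_reg = lambda_or * t_conorm_gates.mean()             # discourage disjunction gates -> 1
    return fit_loss + and_reg + or_reg

out = torch.rand(32)       # continuous truth values for a batch of samples
g_and = torch.rand(10)     # gates of the conjunction layer (m = 10 clauses)
g_or = torch.rand(10, 2)   # gates of the disjunction layer (n = 2 literals each)
print(gcln_loss(out, g_and, g_or))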

5.2.2. Inequality Learning

(a) Learned inequality bounds.
(b) Learned inequality bounds on sqrt.
Figure 10. Examples of 2 dimensional bound fitting.

Inequality learning largely follows the same procedure as equality learning, with two differences. First, we use the PBQU activation (i.e., the parametric relaxation for ≥) introduced in §4.2 instead of the Gaussian activation function (i.e., the parametric relaxation for equality). This difference is shown in Figure 9. As discussed in §4.2, the PBQU activation learns tight inequalities rather than loose ones.

Second, we structure the dropout on inequality constraints to consider all possible combinations of variables up to a set number of terms and maximum degree (up to 3 terms and 2nd degree in our evaluation). We then train the model following the same optimization used in equality learning, and remove constraints that do not fit the data based on their PBQU activations after the model has finished training.

When extracting a formula from the model, we remove poorly fit learned bounds whose PBQU activations fall below a set threshold. As discussed in §4.2, the PBQU activation penalizes points that are farther from its bound. The tight-fitting bounds in Figures 10(a) and 10(b) (solid red lines) have PBQU activations close to 1, while loose-fitting bounds (dashed lines) have activations close to 0. After selecting the best-fitting bounds, we check against the loop specification and remove any remaining constraints that are unsound. If the resulting invariant is insufficient to prove the postcondition, the model is retrained using the counterexamples generated during the specification check.

5.3. External Function Calls

In realistic applications, loops are not entirely whitebox and may contain calls to external functions for which the signature is provided but not the code body. In these cases, external functions may also appear in the loop invariant. To address these cases, when an external function is present, we sample it during loop execution. To sample the function, we execute it with all combinations of variables in scope during sampling that match its call signature.

For example, the function gcd, for greatest common divisor, is required in the invariant for four of the evaluation problems that compute either greatest common divisors or least common multiples (egcd2, egcd3, lcm1, and lcm2). In practice, we constrain our system to binary functions, but it would not be difficult to use static analysis to extend support to more complex external function calls. This procedure of constructing terms containing external function calls is orthogonal to our training framework.

5.4. Fractional Sampling Implementation

We apply Fractional Sampling on a per-program basis when we observe that the model is unable to learn the correct polynomial from the initial samples. We first sample initial values on a coarse fractional grid, then on successively finer grids, until the model learns a correct invariant. We do not apply Fractional Sampling to variables involved in predicates or external function calls, such as gcd. In principle, predicate constraints can be relaxed to facilitate more general sampling; we will investigate this in future work.

Among all the programs in our evaluation, only two, ps5 and ps6, require Fractional Sampling. For both of them, sampling on the first, coarser grid is sufficient to learn the correct invariant, although more fine-grained sampling helps the model learn a correct invariant more quickly. The cost associated with Fractional Sampling is small.

6. Evaluation

We evaluate our approach on NLA, a benchmark of common numerical algorithms with nonlinear invariants. We first perform a comparison with two prior works, NumInv and PIE, which use polynomial equation solving and template learning, respectively. We then perform an ablation of the methods we introduce in this paper. Finally, we evaluate the stability of our approach against a baseline CLN model.

Evaluation Environment. The experiments described in this section were conducted on an Ubuntu 18.04 server with an Intel Xeon E5-2623 v4 2.60GHz CPU, 256 GB of memory, and an Nvidia GTX 1080Ti GPU.

System Configuration.

We implement our method with the PyTorch Framework and use the Z3 SMT solver to validate the correctness of the inferred loop invariants.

For the four programs involving greatest common divisors, we manually check the validity of the learned invariant since gcd is not supported by Z3. We use a G-CLN model with the CNF architecture described in §5.2.1, with a conjunction of ten clauses, each with up to two literals. We use adaptive regularization on the CLN gates: the gate regularization coefficients are initialized to small values and scaled up after each epoch until they reach a set threshold. We try three maximum denominators for the coefficient extraction in §4.1. The default dropout rate in §5.1.3 is decreased after each failed attempt until it reaches a minimum. We use the Adam optimizer with a fixed learning rate, weight decay, and maximum number of epochs.

6.1. Polynomial Invariant Dataset

We evaluate our method on a dataset of programs that require nonlinear polynomial invariants (Nguyen et al., 2012a). The problems in this dataset represent various numerical algorithms, ranging from modular division and greatest common divisor (gcd) to computing geometric and power series. These algorithms involve up to triply nested and sequential loops, which we handle by predicting all the requisite invariants using the model before checking their validity. We sample within the input space of the whole program just as we do with single-loop problems and annotate each recorded state with the loop it is associated with. The invariants involve polynomials of degree up to six and up to thirteen variables.

Problem   Degree   # Vars   PIE   NumInv   G-CLN
divbin 2 5 -
cohendiv 2 6 -
mannadiv 2 5
hard 2 6 -
sqrt1 2 4 -
dijkstra 2 5 -
cohencu 3 5 -
egcd 2 8 -
egcd2 2 11 -
egcd3 2 13 -
prodbin 2 5 -
prod4br 3 6
fermat1 2 5 -
fermat2 2 5 -
freire1 2 3 -
freire2 3 4 -
knuth 3 8 -
lcm1 2 6 -
lcm2 2 6
geo1 2 5
geo2 2 5
geo3 3 6
ps2 2 4
ps3 3 4
ps4 4 4
ps5 5 4 -
ps6 6 4 -
Table 2. Problems requiring nonlinear polynomial invariants from the NLA dataset. We additionally tested Code2Inv on the same problems as PIE and it fails to solve any within 1 hour. NumInv results are based on Table 1 in (Nguyen et al., 2017). G-CLN solves 26 of 27 problems with an average execution time of 53.3 seconds.
Problem   Data Norm.   Weight Reg.   Dropout   Frac. Sampling   Full Method
divbin
cohendiv
mannadiv
hard
sqrt1
dijkstra
cohencu
egcd
egcd2
egcd3
prodbin
prod4br
fermat1
fermat2
freire1
freire2
knuth
lcm1
lcm2
geo1
geo2
geo3
ps2
ps3
ps4
ps5
ps6
Table 3. Ablation of various components of the G-CLN model. Each column reports which problems can be solved by G-CLN when that feature is ablated.

Performance Comparison. Our method is able to solve 26 of the 27 problems, as shown in Table 2, while NumInv solves 23 of 27. Our average execution time was 53.3 seconds, a minor improvement over NumInv, which reported 69.9 seconds. We also evaluate LoopInvGen (PIE) on a subset of the simpler problems which are available in a compatible format (LoopInvGen uses the SyGuS competition format, which is an extended version of smtlib2). It was not able to solve any of these problems before hitting a 1-hour timeout. In Table 2, we indicate solved problems with ✓, unsolved problems with ✗, and problems that were not attempted with a dash.

The single problem we do not solve, knuth, is the most difficult from a learning perspective. The invariant for the problem is one of the most complex in the benchmark. Without considering the external function call (modular division), there are already 165 potential terms of degree at most 3, nearly twice as many as for the next most complex problem in the benchmark, making it difficult to learn a precise invariant with gradient-based optimization. We plan to explore better initialization and training strategies to scale to complex loops like knuth in future work.

NumInv is able to find the equality constraint in this invariant because its approach is specialized for equality constraint solving. However, we note that NumInv only infers octahedral inequality constraints and does not in fact infer the nonlinear and three-variable inequalities in the benchmark.

We successfully handle the external binary function required in fermat1 and fermat2, indicating that our model supports external function calls. Additionally, for four problems (egcd2, egcd3, lcm1, and lcm2), we incorporate the external gcd function as well.

6.2. Ablation Study

We conduct an ablation study to demonstrate the benefits gained from the normalization and regularization techniques, term dropout, and Fractional Sampling. Table 3 shows that data normalization is crucial for nearly all the problems, especially for preventing high-order terms from dominating the training process. Without weight regularization, the problems that involve inequalities over multiple variables cannot be solved; 7 of the 27 problems cannot be solved without dropout, which helps avoid the degenerate case where the network learns repetitions of the same atomic clause. Fractional Sampling helps solve the high-degree (5th- and 6th-order) polynomial problems, where the distance between integer sample points grows quickly.

6.3. Stability

We compare the stability of gated CLNs with the standard CLNs proposed in (Ryan et al., 2020); Table 4 shows the result. We ran the two CLN methods without automatic restarts 20 times per problem and compared the probability of arriving at a solution. We tested on the example problems described in (Ryan et al., 2020) with disjunctions and conjunctions of equalities, two problems from Code2Inv, as well as ps2 and ps3 from NLA. As expected, our regularization and gated t-norms vastly improve the stability of the model, since clauses with poorly initialized weights can be ignored by the network. We saw improvements across all six problems, with the baseline CLN model having an average convergence rate of 58.3% and the G-CLN converging 97.5% of the time on average.

6.4. Linear Invariant Dataset

We evaluate our system on the Code2Inv benchmark (Si et al., 2018) of 133 linear loop invariant inference problems with source code and SMT loop invariant checks. We hold out 9 problems shown to be theoretically unsolvable in (Ryan et al., 2020). Our system finds correct invariants for all remaining 124 theoretically solvable problems in the benchmark in under 30s.

Problem   CLN Convergence Rate   G-CLN Convergence Rate
Conj Eq 75% 95%
Disj Eq 50% 100%
Code2Inv 1 55% 90%
Code2Inv 11 70% 100%
ps2 70% 100%
ps3 30% 100%
Table 4. Table comparing the stability of CLN2INV with our method. The statistics reported are over 20 runs per problem with randomized initialization.

7. Related Work

Numerical Relaxations.

Inductive logic programming (ILP) has been used to learn a logical formula consistent with a set of given data points. More recently, efforts have focused on differentiable relaxations of ILP for learning (Kimmig et al., 2012; Yang et al., 2017; Evans and Grefenstette, 2018; Payani and Fekri, 2019) or program synthesis (Si et al., 2019). Other recent efforts have used formulas as input to graph and recurrent neural networks to solve Circuit SAT problems and identify unsat cores (Amizadeh et al., 2019; Selsam et al., 2019; Selsam and Bjørner, 2019). FastSMT also uses a neural network to select optimal SMT solver strategies (Balunovic et al., 2018). In contrast, our work relaxes the semantics of SMT formulas, allowing us to learn the formulas themselves.

Counterexample-Driven Invariant Inference.

There is a long line of work on learning loop invariants from counterexamples. ICE-DT uses decision tree learning and leverages counterexamples which violate the inductive verification condition (Garg et al., 2016; Zhu et al., 2018; Garg et al., 2014). Combinations of linear classifiers have been applied to learning CHC clauses (Zhu et al., 2018).

A state-of-the-art method, LoopInvGen (PIE), learns the loop invariant using enumerative synthesis to repeatedly add data-consistent clauses to strengthen the post-condition until it becomes inductive (Sharma et al., 2013b; Padhi et al., 2016; Padhi and Millstein, 2017). For the strengthening procedure, LoopInvGen uses PAC learning, a form of boolean formula learning, to learn which combination of candidate atomic clauses is consistent with the observed data. In contrast, our system learns invariants from trace data.

Neural Networks for Invariant Inference.

Recently, neural networks have been applied to loop invariant inference. Code2Inv combines graph and recurrent neural networks to model the program graph and learn from counterexamples (Si et al., 2018). In contrast, CLN2INV uses CLNs to learn SMT formulas for invariants directly from program data (Ryan et al., 2020). We also use CLNs but incorporate gating and other improvements in order to learn general nonlinear loop invariants.

Polynomial Invariants. There have been efforts to use abstract interpretation to discover polynomial invariants (Rodríguez-Carbonell and Kapur, 2004, 2007). More recently, Compositional Recurrence Analysis (CRA) performs analysis on an abstract domain of transition formulas, but relies on over-approximations that prevent it from learning sufficient invariants (Farzan and Kincaid, 2015; Kincaid et al., 2017a, b). Data-driven methods based on linear algebra, such as Guess-and-Check, are able to learn polynomial equality invariants accurately (Sharma et al., 2013a). Guess-and-Check learns equality invariants using the polynomial kernel, but it cannot learn disjunctions and inequalities, which our framework supports natively.

NumInv (Nguyen et al., 2012b, 2014, 2017) uses the polynomial kernel but also learns octahedral inequalities. NumInv sacrifices soundness for performance by replacing Z3 with KLEE, a symbolic executor; in particular, it treats invariants that lead to KLEE timeouts as valid. Our method is instead sound and learns more general inequalities than NumInv.

8. Conclusion

We introduce G-CLNs, a new gated neural architecture that can learn general nonlinear loop invariants. We additionally introduce Fractional Sampling, a method that soundly relaxes program semantics to perform dense sampling, and PBQU activations, which naturally learn tight inequality bounds for verification. We evaluate our approach on a set of 27 polynomial loop invariant inference problems and solve 26 of them, 3 more than prior work, while also improving the convergence rate to 97.5% on quadratic problems, a 39.2% improvement over CLN models.

Acknowledgements

The authors are grateful to our shepherd, Aditya Kanade, and the anonymous reviewers for valuable feedback that improved this paper significantly. This work is sponsored in part by NSF grants CNS-18-42456, CNS-18-01426, CNS-16-17670, CCF-1918400; ONR grant N00014-17-1-2010; an ARL Young Investigator (YIP) award; an NSF CAREER award; a Google Faculty Fellowship; a Capital One Research Grant; a J.P. Morgan Faculty Award; a Columbia-IBM Center Seed Grant Award; and a Qtum Foundation Research Gift. Any opinions, findings, conclusions, or recommendations expressed herein are those of the authors and do not necessarily reflect those of the US Government, ONR, ARL, NSF, Google, Capital One, J.P. Morgan, IBM, or Qtum.

References

  • S. Amizadeh, S. Matusevych, and M. Weimer (2019) Learning to solve circuit-SAT: an unsupervised differentiable approach. In International Conference on Learning Representations, External Links: Link Cited by: §7.
  • D. Bahdanau, K. Cho, and Y. Bengio (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Cited by: §1.
  • M. Balunovic, P. Bielik, and M. Vechev (2018) Learning to solve smt formulas. In Advances in Neural Information Processing Systems, pp. 10317–10328. Cited by: §7.
  • A. Biere, H. van Maaren, and T. Walsh (2009) Handbook of Satisfiability: Volume 185, Frontiers in Artificial Intelligence and Applications. IOS Press, Amsterdam, The Netherlands. External Links: ISBN 1586039296, 9781586039295 Cited by: §2.1.
  • A. Blass and Y. Gurevich (2001) Inadequacy of computable loop invariants. ACM Transactions on Computational Logic (TOCL) 2 (1), pp. 1–11. Cited by: §1.
  • W. Damm, G. Pinto, and S. Ratschan (2005) Guaranteed termination in the verification of ltl properties of non-linear robust discrete time hybrid systems. In International Symposium on Automated Technology for Verification and Analysis, pp. 99–113. Cited by: §1.
  • L. De Moura and N. Bjørner (2008) Z3: an efficient smt solver. In International conference on Tools and Algorithms for the Construction and Analysis of Systems, pp. 337–340. Cited by: §2.1.
  • R. Evans and E. Grefenstette (2018) Learning explanatory rules from noisy data. Journal of Artificial Intelligence Research 61, pp. 1–64. Cited by: §7.
  • A. Farzan and Z. Kincaid (2015) Compositional recurrence analysis. In 2015 Formal Methods in Computer-Aided Design (FMCAD), pp. 57–64. Cited by: §7.
  • C. A. Furia, B. Meyer, and S. Velder (2014) Loop invariants: analysis, classification, and examples. ACM Computing Surveys (CSUR) 46 (3), pp. 34. Cited by: §1.
  • P. Garg, C. Löding, P. Madhusudan, and D. Neider (2014) ICE: a robust framework for learning invariants. In International Conference on Computer Aided Verification, pp. 69–87. Cited by: §7.
  • P. Garg, D. Neider, P. Madhusudan, and D. Roth (2016) Learning invariants using decision trees and implication counterexamples. In ACM Sigplan Notices, Vol. 51, pp. 499–512. Cited by: §7.
  • F. A. Gers, J. Schmidhuber, and F. Cummins (1999) Learning to forget: continual prediction with lstm. Cited by: §1.
  • P. Hájek (2013) Metamathematics of fuzzy logic. Vol. 4, Springer Science & Business Media. Cited by: §2.2.
  • C. A. R. Hoare (1969) An axiomatic basis for computer programming. Communications of the ACM 12 (10), pp. 576–580. Cited by: §1.
  • S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. Cited by: §1.
  • A. Kimmig, S. Bach, M. Broecheler, B. Huang, and L. Getoor (2012) A short introduction to probabilistic soft logic. In Proceedings of the NIPS Workshop on Probabilistic Programming: Foundations and Applications, pp. 1–4. Cited by: §7.
  • Z. Kincaid, J. Breck, A. F. Boroujeni, and T. Reps (2017a) Compositional recurrence analysis revisited. ACM SIGPLAN Notices 52 (6), pp. 248–262. Cited by: §7.
  • Z. Kincaid, J. Cyphert, J. Breck, and T. Reps (2017b) Non-linear reasoning for invariant synthesis. Proceedings of the ACM on Programming Languages 2 (POPL), pp. 1–33. Cited by: §7.
  • H. Lin, P. J. Antsaklis, et al. (2014) Hybrid dynamical systems: an introduction to control and verification. Foundations and Trends® in Systems and Control 1 (1), pp. 1–172. Cited by: §1.
  • T. Nguyen, T. Antonopoulos, A. Ruef, and M. Hicks (2017) Counterexample-guided approach to finding numerical invariants. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, pp. 605–615. Cited by: §1, Table 2, §7.
  • T. Nguyen, D. Kapur, W. Weimer, and S. Forrest (2012a) Using dynamic analysis to discover polynomial and array invariants. In Proceedings of the 34th International Conference on Software Engineering, pp. 683–693. Cited by: §6.1.
  • T. Nguyen, D. Kapur, W. Weimer, and S. Forrest (2012b) Using dynamic analysis to discover polynomial and array invariants. In Proceedings of the 34th International Conference on Software Engineering, pp. 683–693. Cited by: §7.
  • T. Nguyen, D. Kapur, W. Weimer, and S. Forrest (2014) DIG: a dynamic invariant generator for polynomial and array invariants. ACM Transactions on Software Engineering and Methodology (TOSEM) 23 (4), pp. 30. Cited by: §7.
  • S. Padhi and T. D. Millstein (2017) Data-driven loop invariant inference with automatic feature synthesis. CoRR abs/1707.02029. External Links: Link, 1707.02029 Cited by: §7.
  • S. Padhi, R. Sharma, and T. Millstein (2016) Data-driven precondition inference with learned features. ACM SIGPLAN Notices 51 (6), pp. 42–56. Cited by: §1, §7.
  • A. Payani and F. Fekri (2019) Inductive logic programming via differentiable deep neural logic networks. arXiv preprint arXiv:1906.03523. Cited by: §7.
  • E. Rodríguez-Carbonell and D. Kapur (2004) Automatic generation of polynomial loop invariants: algebraic foundations. In Proceedings of the 2004 international symposium on Symbolic and algebraic computation, pp. 266–273. Cited by: §7.
  • E. Rodríguez-Carbonell and D. Kapur (2007) Generating all polynomial invariants in simple loops. Journal of Symbolic Computation 42 (4), pp. 443–476. Cited by: §7.
  • G. Ryan, J. Wong, J. Yao, R. Gu, and S. Jana (2020) CLN2INV: learning loop invariants with continuous logic networks. In International Conference on Learning Representations, External Links: Link Cited by: Appendix A, §1, §1, §2.2, §2.3, §4.1, §4.1, §4.2, §6.3, §6.4, §7.
  • D. Selsam and N. Bjørner (2019) Guiding high-performance sat solvers with unsat-core predictions. In International Conference on Theory and Applications of Satisfiability Testing, pp. 336–353. Cited by: §7.
  • D. Selsam, M. Lamm, B. Bünz, P. Liang, L. de Moura, and D. L. Dill (2019) Learning a SAT solver from single-bit supervision. In International Conference on Learning Representations, External Links: Link Cited by: §7.
  • R. Sharma, S. Gupta, B. Hariharan, A. Aiken, P. Liang, and A. V. Nori (2013a) A data driven approach for algebraic loop invariants. In European Symposium on Programming, pp. 574–592. Cited by: §1, §5.1.3, §7.
  • R. Sharma, S. Gupta, B. Hariharan, A. Aiken, and A. V. Nori (2013b) Verification as learning geometric concepts. In International Static Analysis Symposium, pp. 388–411. Cited by: §7.
  • X. Si, H. Dai, M. Raghothaman, M. Naik, and L. Song (2018) Learning loop invariants for program verification. In Advances in Neural Information Processing Systems, pp. 7751–7762. Cited by: §1, §1, §6.4, §7.
  • X. Si, M. Raghothaman, K. Heo, and M. Naik (2019) Synthesizing datalog programs using numerical relaxation. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pp. 6117–6124. Cited by: §7.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15 (1), pp. 1929–1958. Cited by: §1, §5.1.3.
  • F. Yang, Z. Yang, and W. W. Cohen (2017) Differentiable learning of logical rules for knowledge base reasoning. In Advances in Neural Information Processing Systems, pp. 2319–2328. Cited by: §7.
  • H. Zhu, S. Magill, and S. Jagannathan (2018) A data-driven chc solver. In ACM SIGPLAN Notices, Vol. 53, pp. 707–721. Cited by: §1, §7.

Appendix A Proof for Theorem 3.1

Property 1. ∀x, y ∈ [0, 1], t(x, y) > 0 ⟺ (x > 0) ∧ (y > 0).

Theorem 3.1. For a gated CLN model M with input nodes x_1, …, x_n and output node p, if all gating parameters are either 0 or 1, then using the formula extraction algorithm, the recovered SMT formula F is equivalent to the gated CLN model M. That is, ∀x,

(8) M(x) = 1 ⟺ F(x) = True
(9) M(x) = 0 ⟺ F(x) = False

as long as the t-norm in M satisfies Property 1.
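
As a quick illustration (our own toy instance, not one of the paper's formulas), consider a gated t-norm node combining three submodels M_1, M_2, M_3 with gating parameters g = (1, 0, 1). Algorithm 1 extracts F = F_1 ∧ F_3, since only the first and third gates are activated. The gated-off input is mapped to the t-norm identity 1 (via g_i M_i + (1 − g_i) with g_i = 0, as in the proof below), so the node evaluates to t(M_1, 1, M_3) = t(M_1, M_3), which equals 1 exactly when M_1 = 1 and M_3 = 1; by the induction hypothesis this holds exactly when F_1 ∧ F_3 is true, as the theorem requires.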

Proof.

We prove this by induction.

T-norm Case. If the final operation in M is a gated t-norm combining submodels M_1, …, M_k with gating parameters g_1, …, g_k, we know that for each submodel M_i the gating parameters are all either 0 or 1. According to the induction hypothesis, for each M_i, using Algorithm 1, we can extract an equivalent SMT formula F_i satisfying Eq. (8)(9). Now we prove the full model M and the extracted formula F satisfy Eq. (8). The proof for Eq. (9) is similar and omitted for brevity. For simplicity we use M_i to denote M_i(x).

Now the proof goal becomes M(x) = 1 ⟺ F(x) = True, and the induction hypothesis is

(10) ∀i ∈ {1, …, k}, M_i = 1 ⟺ F_i(x) = True

From lines 2-5 in Algorithm 1, we know that the extracted formula F is the conjunction of the subset of formulas F_i whose gate g_i is activated (g_i = 1). Because in our setting all gating parameters are either 0 or 1, a logically equivalent form of F can be derived:

(11) F(x) = True ⟺ ∀i ∈ {1, …, k}, (g_i = 1) ⟹ (F_i(x) = True)

Recall we are considering the t-norm case, where

M = t(g_1 M_1 + (1 − g_1), …, g_k M_k + (1 − g_k)).

Using the properties of a t-norm in §2.2 we can prove that

M = 1 ⟺ ∀i ∈ {1, …, k}, g_i M_i + (1 − g_i) = 1.

Because each gating parameter g_i is either 0 or 1, we further have

(12) M = 1 ⟺ ∀i ∈ {1, …, k}, (g_i = 1) ⟹ (M_i = 1)

Combining Eq. (10)(11)(12), we finally have M(x) = 1 ⟺ F(x) = True, which concludes the t-norm case for Eq. (8).
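
To sanity-check the t-norm case, the following short Python sketch (our own illustration, not part of the paper's implementation) exhaustively verifies that the gated node evaluates to 1 precisely when the conjunction of the activated submodels holds. It assumes the product t-norm and the g·m + (1 − g) gating form used above, and treats a submodel output of exactly 1 as "F_i(x) is True", per the induction hypothesis.

    import itertools

    def gated_tnorm(ms, gs):
        # Gated product t-norm: a gated-off input (g = 0) is mapped to the
        # t-norm identity 1 via g*m + (1 - g), so it cannot affect the result.
        out = 1.0
        for m, g in zip(ms, gs):
            out *= g * m + (1.0 - g)
        return out

    def extracted_conjunction(ms, gs):
        # Formula extraction (lines 2-5 of Algorithm 1): conjunction of the
        # activated submodels, with m == 1.0 standing in for "F_i(x) is True".
        return all(m == 1.0 for m, g in zip(ms, gs) if g == 1)

    # Check all binary gate assignments and submodel outputs in {0, 0.5, 1}.
    for gs in itertools.product([0, 1], repeat=3):
        for ms in itertools.product([0.0, 0.5, 1.0], repeat=3):
            assert (gated_tnorm(ms, gs) == 1.0) == extracted_conjunction(ms, gs)
    print("gated t-norm matches extracted conjunction on all sampled cases")

The t-conorm case that follows can be checked analogously, with gated-off inputs mapped to the t-conorm identity 0 and the conjunction replaced by a disjunction.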

T-conorm Case. If the final operation in M is a gated t-conorm combining submodels M_1, …, M_k with gating parameters g_1, …, g_k, then, similar to the t-norm case, for each submodel M_i we can extract an equivalent SMT formula F_i using Algorithm 1 according to the induction hypothesis. Again we only prove that the full model M and the extracted formula F satisfy Eq. (8).

From lines 7-10 in Algorithm 1, we know that the extracted formula F is the disjunction of the subset of formulas F_i whose gate g_i is activated (g_i = 1). Because in our setting all gating parameters are either 0 or 1, a logically equivalent form of F can be derived:

(13) F(x) = True ⟺ ∃i ∈ {1, …, k}, (g_i = 1) ∧ (F_i(x) = True)