CLN2INV: Learning Loop Invariants with Continuous Logic Networks

09/25/2019 ∙ by Gabriel Ryan, et al. ∙ Columbia University

Program verification offers a framework for ensuring program correctness and therefore systematically eliminating different classes of bugs. Inferring loop invariants is one of the main challenges behind automated verification of real-world programs, which often contain many loops. In this paper, we present the Continuous Logic Network (CLN), a novel neural architecture for automatically learning loop invariants directly from program execution traces. Unlike existing neural networks, CLNs can learn precise and explicit representations of formulas in Satisfiability Modulo Theories (SMT) for loop invariants from program execution traces. We develop a new sound and complete semantic mapping for assigning SMT formulas to continuous truth values that allows CLNs to be trained efficiently. We use CLNs to implement a new inference system for loop invariants, CLN2INV, that significantly outperforms existing approaches on the popular Code2Inv dataset. CLN2INV is the first tool to solve all 124 theoretically solvable problems in the Code2Inv dataset. Moreover, CLN2INV takes only 1.1 seconds on average per problem, which is 40 times faster than existing approaches. We further demonstrate that CLN2INV can learn 12 significantly more complex loop invariants than the ones required for the Code2Inv dataset.


1 Introduction

Program verification offers a principled approach for systematically eliminating different classes of bugs and proving the correctness of programs. However, as programs have become increasingly complex, real-world program verification often requires prohibitively expensive manual effort (Wilcox et al., 2015; Gu et al., 2016; Chajed et al., 2019). Recent efforts have focused on automating the program verification process, but automated verification of general programs with unbounded loops remains an open problem (Nelson et al., 2017, 2019).

Verifying programs with loops requires determining loop invariants, which capture the effect of the loop on the program state irrespective of the actual number of loop iterations. Automatically inferring correct loop invariants is a challenging problem that is undecidable in general and difficult to solve in practice (Blass & Gurevich, 2001; Furia et al., 2014). Existing approaches use stochastic search (Sharma & Aiken, 2016), heuristics-based search (Galeotti et al., 2015), PAC learning based on counterexamples (Padhi & Millstein, 2017), or reinforcement learning (Si et al., 2018). However, these approaches often struggle to learn complex, real-world loop invariants.

In this paper, we introduce a new approach to learning loop invariants by modeling the loop behavior from program execution traces using a new type of neural architecture. We note that inferring loop invariants can be posed as learning formulas in Satisfiability Modulo Theories (SMT) (Biere et al., 2009) over program variables collected from program execution traces (Nguyen et al., 2017). In principle, neural networks seem well suited to this task because they can act as universal function approximators and have been successfully applied in various domains that require modeling of arbitrary functions (Hornik et al., 1989; Goodfellow et al., 2016). However, loop invariants must be represented as explicit SMT formulas to be usable for program verification. Unfortunately, existing methods for extracting logical rules from general neural architectures lack sufficient precision (Augasta & Kathirvalavakumar, 2012), while inductive logic learning lacks sufficient expressiveness for use in verification (Evans & Grefenstette, 2018).

We address this issue by developing a novel neural architecture, the Continuous Logic Network (CLN), which is able to efficiently learn explicit and precise representations of SMT formulas by using continuous truth values. Unlike existing neural architectures, CLNs represent a learned SMT formula explicitly in their structure, allowing us to precisely extract the exact formula from a trained model.

In order to train CLNs, we introduce a new semantic mapping for SMT formulas to continuous truth values. Our semantic mapping builds on BL, or basic fuzzy logic (Hájek, 2013), to support general SMT formulas in a continuous logic setting. We further prove that our semantic model is sound (i.e., truth assignments for the formulas are consistent with their discrete counterparts) and complete (i.e., all formulas can be represented) with regard to the discrete SMT formula space. These properties allow CLNs to represent any quantifier-free SMT formula operating on mixed integer-real arithmetic as an end-to-end differentiable series of operations.

We use CLNs to implement a new inference system for loop invariants, CLN2INV, that significantly outperforms state-of-the-art tools on the Code2Inv dataset by solving all 124 theoretically solvable problems in the dataset. This is 20 problems more than LoopInvGen, the winner of the SyGuS 2018 competition loop invariant track (Si et al., 2018). Moreover, CLN2INV finds invariants for each program in 1.1 seconds on average, more than 40 times faster than LoopInvGen. We also demonstrate CLN2INV is able to learn complex, real-world loop invariants with combinations of conjunctions and disjunctions of multivariable constraints.

Our main contributions are:


  • We introduce a new semantic mapping for assigning continuous truth values to SMT formulas that is theoretically grounded and enables learning formulas through backpropagation. We further prove that our semantic model is sound and complete.

  • We develop a novel neural architecture, Continuous Logic Networks (CLNs), that to the best of our knowledge is the first to efficiently learn precise and explicit SMT formulas by construction.

  • We use CLNs to implement a new loop invariant inference system, CLN2INV, that is the first to solve all 124 theoretically solvable problems in the Code2Inv dataset, 20 more than existing methods. CLN2INV is able to find invariants for each problem in 1.1 seconds on average, 40 times faster than existing systems.

  • We further show CLN2INV is able to learn 12 significantly more complex loop invariants than those present in the Code2Inv dataset, involving combinations of multivariable constraints.

Related Work.

Traditionally, loop invariant learning has relied on stochastic or heuristics-guided search (Sharma & Aiken, 2016; Galeotti et al., 2015). Other approaches like NumInv analyze traces and discover simple conjunctions of equalities by solving a system of linear equations (Nguyen et al., 2017). LoopInvGen uses PAC learning of CNF using counterexamples (Padhi & Millstein, 2017). By contrast, Code2Inv learns to guess loop invariants using reinforcement learning with recurrent and graph neural networks (Si et al., 2018). However, these approaches struggle to learn complex invariants. Unlike these works, CLN2INV can efficiently learn complex invariants directly from execution traces.

There is a long line of work on PAC learning of boolean formulas, but learning precise formulas requires a prohibitively large number of samples (Kearns et al., 1994). Several recent works use different forms of differentiable logic to learn boolean logic formulas from noisy data (Kimmig et al., 2012; Evans & Grefenstette, 2018; Payani & Fekri, 2019) or to improve adversarial robustness by applying logical rules during training (Fischer et al., 2019). By contrast, our work learns precise SMT formulas directly by construction, allowing us to learn richer predicates with compact representation in a noiseless setting.

2 Background

In this section, we introduce the problem of inferring loop invariants and provide a brief overview of Satisfiability Modulo Theories (SMT), which are used to represent loop invariants. We then provide an introduction to basic fuzzy logic, which we later extend to formulate our new continuous semantic mapping for SMT.

2.1 Loop Invariants

Loop invariants capture loop behavior irrespective of the number of iterations, which is crucial for verifying programs with loops. Given a loop while(LC){C}, a precondition P, and a post-condition Q, the verification task involves finding a loop invariant I that can be concluded from the precondition and implies the post-condition (Hoare, 1969). Formally, it must satisfy the following three conditions, in which the second is a Hoare triple describing the loop:

P ⇒ I        {I ∧ LC} C {I}        (I ∧ ¬LC) ⇒ Q

Example of Loop Invariant. Consider the example loop in Fig. 1. For a loop invariant to be usable, it must be valid for the precondition (t = 10 ∧ u = 0), the recursion step when t ≠ 0, and the post-condition when the loop condition is no longer satisfied, i.e., t = 0. The correct and precise invariant for the program is 2t + u = 20.

//pre: t=10 /\ u=0
while (t != 0){
  t = t - 1;
  u = u + 2;
}
//post: u=20
(a) Example loop

(b) The desired loop invariant I(t, u) is a boolean function over the program variables such that the precondition implies I, I is preserved by the loop body while t ≠ 0, and I together with the negated loop condition implies the post-condition. The desired and precise loop invariant is 2t + u = 20.
Figure 1: Example Loop Invariant inference problem.
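To make the three conditions above concrete, here is a minimal sketch (ours, not part of the paper's tooling) that checks the candidate invariant 2t + u = 20 for the loop in Figure 1 with the Z3 Python bindings (the z3-solver package); the post-condition u = 20 is the one shown in the figure.

# Minimal sketch (ours): checking the three invariant conditions for Figure 1.
from z3 import Ints, And, Not, Implies, Solver, unsat

t, u, t1, u1 = Ints('t u t1 u1')
I = lambda t, u: 2 * t + u == 20              # candidate loop invariant
pre = And(t == 10, u == 0)                    # precondition
lc = t != 0                                   # loop condition
step = And(t1 == t - 1, u1 == u + 2)          # effect of one loop iteration
post = u == 20                                # post-condition

def valid(claim):
    # A formula is valid iff its negation is unsatisfiable.
    s = Solver()
    s.add(Not(claim))
    return s.check() == unsat

print(valid(Implies(pre, I(t, u))))                       # precondition implies I
print(valid(Implies(And(I(t, u), lc, step), I(t1, u1))))  # I is preserved by the loop body
print(valid(Implies(And(I(t, u), Not(lc)), post)))        # I and loop exit imply post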

2.2 Satisfiability Modulo Theories

Satisfiability Modulo Theories (SMT) are an extension of Boolean Satisfiability that allow solvers to reason about complex problems efficiently. Loop invariants and other formulas in program verification are usually encoded with quantifier-free SMT. A formula F in quantifier-free SMT can be inductively defined as below:

F := E1 ⋈ E2 | ¬F | F1 ∧ F2 | F1 ∨ F2,   with ⋈ ∈ {=, ≠, <, >, ≤, ≥},

where E1 and E2 are expressions of terms. The loop invariant in Fig. 1 is an SMT formula. Nonlinear arithmetic theories admit higher-order terms such as x² and x·y, allowing them to express more complex constraints. For example, x·y > 2 is an SMT formula that is true when the value of the higher-order term x·y is larger than 2.
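As a small illustration (ours), such a formula with a nonlinear term can be written and checked for satisfiability directly with the Z3 Python bindings:

# Sketch (ours): a quantifier-free SMT formula over reals with a nonlinear term x*y.
from z3 import Reals, And, Solver, sat

x, y = Reals('x y')
f = And(x * y > 2, x + y <= 5)   # nonlinear constraint conjoined with a linear one
s = Solver()
s.add(f)
print(s.check() == sat)          # True: e.g., x = 1, y = 3 satisfies f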

2.3 Basic Fuzzy Logic (BL)

Basic fuzzy logic (BL) is a class of logic that uses continuous truth values in the range [0, 1] and is differentiable almost everywhere (Hájek, 2013). (Almost everywhere indicates the function is differentiable everywhere except on a set of measure zero; for example, a Rectified Linear Unit is differentiable everywhere except at zero.) BL defines logical conjunction with functions called t-norms, which must satisfy specific conditions to ensure that the behavior of the logic is consistent with boolean First Order Logic. Formally, a t-norm (denoted ⊗) in BL is a continuous binary operator over continuous truth values satisfying the following conditions:


  • 1) commutativity and associativity: the order in which a set of t-norms on continuous truth values are evaluated should not change the result: x ⊗ y = y ⊗ x and (x ⊗ y) ⊗ z = x ⊗ (y ⊗ z).

  • 2) monotonicity: increasing any input value to a t-norm operation should not cause the result to decrease: x1 ≤ x2 implies x1 ⊗ y ≤ x2 ⊗ y.

  • 3) consistency: the result of any t-norm applied to a truth value and 1 should be that truth value, and the result of any truth value and 0 should be 0: x ⊗ 1 = x and x ⊗ 0 = 0.

3 Continuous Satisfiability Modulo Theories

We introduce a continuous semantic mapping, S, for SMT on BL that is end-to-end differentiable. The mapping S associates SMT formulas with continuous truth values while preserving each formula's semantics. In this paper, we only consider quantifier-free formulas. This process is analogous to constructing t-norms for BL, where a t-norm operates on continuous logical inputs.

We define three desirable properties for the continuous semantic mapping S that preserve formula semantics while facilitating parameter training with gradient descent:

  1. S should be consistent with BL. For any two formulas F and F', where F is satisfied and F' is unsatisfied by an assignment of the formula terms, we should have S(F') < S(F). This ensures the semantics of SMT formulas are preserved.

  2. S should be differentiable almost everywhere. This facilitates training with gradient descent through backpropagation.

  3. S should be increasing everywhere as the terms in the formula approach constraint satisfaction, and decreasing everywhere as the terms in the formula approach constraint violation. This ensures there is always a nonzero gradient for training.

Continuous semantic mapping. We first define the mapping for ">" (greater-than) and "≥" (greater-than-or-equal-to), as well as adopting definitions for "∧", "∨", and "¬" from BL. All other operators can be derived from these. For example, "≤" (less-than-or-equal-to) is derived using ">" and "¬", while "=" (equality) is then defined as the conjunction of formulas using "≥" and "≤". Given constants ε > 0 and B > 0, we first define the mapping on ">" and "≥" using shifted and scaled sigmoid functions:

S(x > y) := σ(B(x − y − ε)),    S(x ≥ y) := σ(B(x − y + ε)),

where σ denotes the sigmoid function.

Figure 2: Illustration of the mapping S: (a) plot of an inequality mapping with the shifted sigmoid; (b) plot of an equality mapping with the product t-norm.

Illustrations of the shifted sigmoid mappings are given in Figure 2. The validity of our semantic mapping lies in the following fact, which can be proven with basic algebra: when ε goes to zero and B goes to infinity, our continuous mappings of ">" and "≥" preserve their original semantics, and under these conditions our mapping satisfies all three desirable properties. In practice, the properties are also satisfied approximately for sufficiently small ε and sufficiently large B.
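A brief PyTorch sketch (ours) of one way to realize these shifted and scaled sigmoid mappings; the exact parameterization σ(B(x − y ± ε)) and the sample values of B and ε are assumptions consistent with the description above:

import torch

def S_ge(x, y, B=10.0, eps=0.5):
    # Continuous truth value for x >= y: approaches 1 at x == y as B grows.
    return torch.sigmoid(B * (x - y + eps))

def S_gt(x, y, B=10.0, eps=0.5):
    # Continuous truth value for x > y: approaches 0 at x == y as B grows.
    return torch.sigmoid(B * (x - y - eps))

x = torch.tensor([0.0, 1.0, 2.0])
print(S_ge(x, torch.tensor(1.0)))  # low, high, high
print(S_gt(x, torch.tensor(1.0)))  # low, low, high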

Next we define the mapping for the boolean operators "∧", "∨", and "¬" using BL. Recall that in BL, a t-norm is a continuous function that behaves like logical conjunction. In §2.3, we outlined requirements for a valid t-norm. Three widely used t-norms that satisfy the requirements are the Lukasiewicz t-norm (Lukasiewicz, 1930), the Godel t-norm (Baaz et al., 1996), and the product t-norm (Hájek et al., 1996). Each t-norm has a t-conorm associated with it (denoted ⊕), which can be considered as logical disjunction. Given a t-norm ⊗, the t-conorm ⊕ can be derived with DeMorgan's law: x ⊕ y = 1 − ((1 − x) ⊗ (1 − y)).

Lukasiewicz: x ⊗ y = max(0, x + y − 1).    Godel: x ⊗ y = min(x, y).    Product: x ⊗ y = x · y.

Given a specific t-norm ⊗ and its corresponding t-conorm ⊕, it is straightforward to define the mappings of "∧", "∨", and "¬":

S(F1 ∧ F2) := S(F1) ⊗ S(F2),    S(F1 ∨ F2) := S(F1) ⊕ S(F2),    S(¬F) := 1 − S(F).

Based on the above definitions, the mapping for other operators can be derived as follows:

S(x ≤ y) := 1 − S(x > y),    S(x < y) := 1 − S(x ≥ y),    S(x = y) := S(x ≥ y) ⊗ S(x ≤ y),    S(x ≠ y) := 1 − S(x = y).

The mapping on "=" is valid since, as ε goes to zero and B goes to infinity, S(x = y) approaches 1 when x = y and 0 otherwise (see Appendix A for the proof).

The mappings for the other operators share similar behavior in the limit and also fulfill our desired properties under the same conditions.
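Putting the pieces together, the following sketch (ours) maps a small formula such as (x ≥ 0) ∧ (x = y) to a continuous truth value using the product t-norm and the derived operators; the helper names are ours:

import torch

def t_norm(a, b):          # product t-norm (logical AND)
    return a * b

def t_conorm(a, b):        # derived via DeMorgan's law (logical OR)
    return 1 - (1 - a) * (1 - b)

def S_ge(x, y, B, eps):    # x >= y
    return torch.sigmoid(B * (x - y + eps))

def S_gt(x, y, B, eps):    # x > y
    return torch.sigmoid(B * (x - y - eps))

def S_eq(x, y, B, eps):    # x == y as the conjunction (x >= y) AND (x <= y)
    return t_norm(S_ge(x, y, B, eps), 1 - S_gt(x, y, B, eps))

B, eps = torch.tensor(50.0), torch.tensor(0.1)
x, y = torch.tensor(3.0), torch.tensor(3.0)
formula = t_norm(S_ge(x, torch.tensor(0.0), B, eps), S_eq(x, y, B, eps))
print(formula)   # close to 1, and differentiable with respect to x, y, B, eps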

Using our semantic mapping S, most of the standard operations of integer and real arithmetic, including addition, subtraction, multiplication, division, and exponentiation, can be used normally and mapped to continuous truth values while keeping the entire formula differentiable. Moreover, any expression in SMT that has an integer or real-valued result can be mapped to continuous logical values via these formulas, although end-to-end differentiability may not be maintained in cases where specific operations are nondifferentiable.

4 Continuous Logic Networks

In this section, we describe the construction of Continuous Logic Networks (CLNs) based on our continuous semantic mapping for SMT on BL.

CLN Construction. CLNs use our continuous semantic mapping to represent SMT formulas as an end-to-end differentiable model that can learn selected constants in the formula. When constructing a CLN, we work from an SMT Formula Template, in which every value is marked as either an input term, a constant, or a learnable parameter. Given an SMT Formula Template, we dynamically construct a CLN as a computational graph, where input terms are marked as model inputs. The operations in each SMT clause are recursively added to the graph, followed by the logical operations on clauses. Figure 3 shows an example formula template and the constructed CLN. We denote the CLN model constructed from the formula template F as M(F).
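As a simplified illustration of the construction (ours, not the paper's exact implementation), the template (w1·x + w2·y + b = 0) ∧ (x ≥ c), with w1, w2, b learnable and c a constant, could be realized as a small PyTorch module whose forward pass returns continuous truth values for a batch of trace samples:

import torch
import torch.nn as nn

class CLNTemplate(nn.Module):
    # Sketch (ours) of a CLN for the template (w1*x + w2*y + b = 0) /\ (x >= c).
    # w1, w2, b are learnable; c is a constant fixed by static analysis.
    # B (scaling) and eps (offset) control the sharpness of the predicates.
    def __init__(self, c):
        super().__init__()
        self.w = nn.Parameter(torch.rand(2) * 2 - 1)   # uniform in [-1, 1]
        self.b = nn.Parameter(torch.rand(1) * 2 - 1)
        self.B = nn.Parameter(torch.tensor(5.0))
        self.eps = nn.Parameter(torch.tensor(0.5))
        self.c = c

    def forward(self, xy):
        x, y = xy[:, 0], xy[:, 1]
        lin = self.w[0] * x + self.w[1] * y + self.b
        # equality as the conjunction of >= and <= (product t-norm)
        eq = torch.sigmoid(self.B * (lin + self.eps)) * torch.sigmoid(self.B * (-lin + self.eps))
        ge = torch.sigmoid(self.B * (x - self.c + self.eps))
        return eq * ge   # product t-norm for the top-level conjunction

model = CLNTemplate(c=0.0)
traces = torch.tensor([[10., 0.], [9., 2.], [8., 4.]])   # e.g., samples of (t, u)
print(model(traces))   # continuous truth values in (0, 1)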

CLN Training. Once the CLN has been constructed based on a formula template, it is trained with the following optimization. Given a CLN model constructed from an SMT template with learnable parameters, and a set of valid assignments for the terms in the SMT template, the expected value of the CLN is maximized by minimizing a loss function that penalizes model outputs that are less than one. A minimum scaling factor β is selected, and a hinge loss is applied to the scaling factor B to force the differentiable predicates to approach sharp cutoffs. The offset ε is also regularized to ensure precision. The overall optimization therefore minimizes, over the learnable parameters, a per-sample loss that is strictly decreasing in the model output, together with the scaling-factor hinge penalty and the offset regularization, where the hyperparameters λ and γ respectively govern the weight assigned to the scaling-factor and offset regularization terms.
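Continuing the CLNTemplate sketch above, a hedged version of such a training objective might look as follows (ours; the exact loss form, hyperparameters, and minimum scaling factor in the paper may differ):

import torch

def cln_loss(output, B, eps, beta=10.0, lam=0.1, gamma=0.1):
    # output: continuous truth values for valid trace samples (should approach 1).
    # (1 - output) is one simple per-sample loss that is strictly decreasing in
    # the output; the hinge term pushes the scaling factor B above a minimum
    # beta, and the offset eps is regularized toward zero for precision.
    fit = (1 - output).mean()
    hinge = torch.relu(beta - B)
    return fit + lam * hinge + gamma * eps.abs()

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(500):
    optimizer.zero_grad()
    out = model(traces)
    loss = cln_loss(out, model.B, model.eps)
    loss.backward()
    optimizer.step()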

Given a CLN that has been trained to a loss approaching 0 on a given set of valid assignments, we now show that the resulting continuous SMT formula learned by the CLN is consistent with an equivalent boolean SMT formula. In particular, we prove that continuous SMT formulas learned with CLNs are sound and complete with regard to SMT formulas on discrete logic. We further prove that a subset of SMT formulas are guaranteed to converge to a globally optimal solution.

Soundness. Given an SMT formula F, the CLN model M(F) constructed from F always preserves the truth value of F: given a valid assignment x to the terms in F, M(F)(x) approaches 1 when F(x) is true and approaches 0 when F(x) is false.

Completeness. For any SMT formula F, a CLN model M(F) can be constructed representing that formula. In other words, CLNs can express all SMT formulas on integers and reals.

We formally state these properties in Theorem 1 and provide a proof by induction on the formula constructor in Appendix B. Before that, we need to define a property for t-norms.

Property 1. x > 0 ∧ y > 0 ⟹ x ⊗ y > 0.

The product t-norm and Godel t-norm have this property, while the Lukasiewicz t-norm does not.

Theorem 1.

For any SMT formula F, there exists a CLN model M(F) such that M(F)(x) approaches 1 when F(x) is true and 0 when F(x) is false (as ε goes to zero and B goes to infinity), as long as the t-norm used in building M(F) satisfies Property 1.

Optimality. For a subset of SMT formulas (conjunctions of multiple linear equalities), CLNs are guaranteed to converge to the global minimum. We formally state this in Theorem 2; the proof can be found in Appendix C. We first define another property similar to strict monotonicity.

Property 2. x1 < x2 ∧ y > 0 ⟹ x1 ⊗ y < x2 ⊗ y.

Theorem 2.

For any CLN model M(F) constructed from a formula F by the procedure shown in the proof of Theorem 1, if F is the conjunction of multiple linear equalities, then any local minimum of the training loss is the global minimum, as long as the t-norm used in building M(F) satisfies Property 2.

5 Loop Invariant Learning

We use CLNs to implement a new inference system for loop invariants, CLN2INV, which learns invariants directly from execution traces. Figure 3 provides an overview of the architecture.

Training Data Generation.

We generate training data by running the program repeatedly on a set of randomly initialized inputs that satisfy the preconditions. Unconstrained variables are initialized from a uniform distribution, and variables with precondition constraints are initialized from a uniform distribution within their constraints. All program variables are recorded before each execution of the loop and after the loop terminates.
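A minimal sketch (ours) of this trace collection for a loop with the same shape as Figure 1, where the initial value of t is treated as unconstrained and drawn uniformly from an illustrative range:

import random

def collect_traces(num_runs=50):
    # Sketch (ours): run the loop on random initializations that satisfy the
    # precondition, recording all program variables before each loop iteration
    # and once more after termination. Here t is drawn uniformly from an
    # illustrative range and u is fixed by the precondition u = 0.
    traces = []
    for _ in range(num_runs):
        t, u = random.randint(1, 20), 0
        samples = [(t, u)]
        while t != 0:
            t, u = t - 1, u + 2
            samples.append((t, u))
        traces.append(samples)          # the final sample is the post-loop state
    return traces

print(collect_traces(1)[0][:3])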

Template Generation. We encode the template using information gathered through static analysis. We collect useful information such as constants found in the program code along with the termination condition. Our analysis also strengthens the precondition and weakens the post-condition to constrain the problem as tightly as possible. For instance, unconstrained variables can be constrained to ensure the loop executes. In Appendix D, we prove this approach maintains soundness as solutions to the constrained problem can be used to reconstruct full solutions.

We generate bounds on individual variables (e.g., t ≥ c) as well as multivariable polynomial constraints (e.g., w1·t + w2·u + b = 0). Constants are optionally placed in constraints based on the static analysis and execution data (i.e., if a variable is initialized to a constant and never changes). We then compose template formulas from the collection of constraints by selecting a subset and joining them with conjunctions or disjunctions.
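One simple way (ours) to enumerate candidate templates from a pool of constraint skeletons is to take small subsets and join them with a conjunction or a disjunction; the constraint strings below are hypothetical placeholders:

from itertools import combinations

# Placeholder constraint skeletons gathered from static analysis; each would be
# instantiated as a (possibly learnable) CLN sub-module during construction.
constraints = ['t >= c', 'u >= c', 'w1*t + w2*u + b = 0']

def candidate_templates(max_size=2):
    # Yield candidate templates, smallest first, joining multi-constraint
    # subsets with either a conjunction or a disjunction.
    for size in range(1, max_size + 1):
        for subset in combinations(constraints, size):
            if size == 1:
                yield ('atom', subset)
            else:
                for connective in ('and', 'or'):
                    yield (connective, subset)

for template in candidate_templates():
    print(template)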

CLN Construction and Training. Once a template formula has been generated, a CLN is constructed from the template using the formulation in §4. As an optimization, we represent equality constraints as Gaussian-like functions that retain a global maximum when the constraint is satisfied as discussed in Appendix E. We then train the model using the collected execution traces.

Figure 3: System architecture and CLN construction from SMT templates.

Invariant Checking. Invariant checking is performed using SMT solvers such as Z3 (De Moura & Bjørner, 2008). After the CLN for a formula template has been trained, the SMT formula for the loop invariant is recovered by normalizing the learned parameters. The invariant is checked against the pre, post, and recursion conditions as described in §2.1.
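One simple way (ours, not necessarily the paper's exact procedure) to normalize learned parameters is to rescale them and snap the result to nearby small rationals before handing the candidate invariant to Z3 as in the sketch of §2.1:

import math
from fractions import Fraction

def normalize_weights(weights, max_denominator=10):
    # Sketch (ours): scale learned weights so the largest magnitude is 1, snap
    # each ratio to a nearby small rational, then clear denominators to get
    # integer coefficients for an SMT candidate such as c1*t + c2*u + c3 = 0.
    largest = max(abs(w) for w in weights)
    fracs = [Fraction(w / largest).limit_denominator(max_denominator) for w in weights]
    lcm = 1
    for f in fracs:
        lcm = lcm * f.denominator // math.gcd(lcm, f.denominator)
    return [int(f * lcm) for f in fracs]

print(normalize_weights([0.499, 0.998, -4.99]))   # -> [1, 2, -10]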

6 Experiments

We compare the performance of CLN2INV with two existing methods and demonstrate the efficacy of the method on several more difficult problems. Finally, we conduct two ablation studies to justify our design choices.

Test Environment. All experiments are performed on an Ubuntu 18.04 server with an Intel Xeon E5-2623 v4 2.60GHz CPU, 256 GB of memory, and an Nvidia GTX 1080Ti GPU.

System Configuration. We implement CLNs in PyTorch and use the Adam optimizer with a fixed learning rate for training (Paszke et al., 2017; Kingma & Ba, 2014). Because of the initialization dependency of neural networks, CLN training randomly restarts if the model does not reach termination within a fixed number of epochs. Learnable parameters are initialized from a uniform distribution in the range [-1, 1], which we found works well in practice.

Test Dataset. We use the same benchmark used in the evaluation of Code2Inv. We have removed nine invalid programs from Code2Inv’s benchmark and test on the remaining 124. The removed programs are invalid because there are inputs which satisfy the precondition but result in a violation of the post-condition after the loop execution. The benchmark consists of loops expressed as C code and corresponding SMT files. Each loop can have nested if-then-else blocks (without nested loops). Programs in the benchmark may also have uninterpreted functions (emulating external function calls) in branches or loop termination conditions.

6.1 Comparison to existing solvers

(a) Runtime performance.
(b) SMT solver calls.
Figure 4: Performance evaluation.

Performance Evaluation. We compare CLN2INV to two state-of-the-art methods: Code2Inv (based on neural code representation and reinforcement learning) and LoopInvGen (PAC learning over synthesized CNF formulas) (Si et al., 2018; Padhi et al., 2016). We limit each method to one hour per problem in the same format as the SyGuS Competition (Alur et al., 2019). Table 1 summarizes the results of the evaluation. CLN2INV is able to solve all 124 problems in the benchmark. LoopInvGen solves 104 problems, while Code2Inv solves 90. (The Code2Inv authors originally reported solving 92 problems with the same one-hour timeout; we believe the difference might be caused by changes in the testing environment or randomized model initialization.)

Runtime Comparison. Figure 4(a) shows the measured runtime on each evaluated system. CLN2INV solves problems in 1.1 seconds on average, which is over 40 times faster than LoopInvGen, the second fastest system in the evaluation. In general, CLN2INV has similar performance to LoopInvGen on simple problems, but is able to scale efficiently to complex problems.

Z3 Solver Utilization. Figure 4(b) shows the number of Z3 calls made by each method. For almost all problems, CLN2INV requires fewer Z3 calls than the other systems, although for some difficult problems it uses more Z3 calls than Code2Inv. While CLN2INV makes roughly twice as many Z3 solver calls as Code2Inv on average, it is able to generate and test candidate loop invariants over 250 times faster on average.

Performance Summary. Table 1 summarizes the results of the performance evaluation. CLN2INV has the lowest time spent per problem, making it the most practical approach. Code2Inv requires more time on average per problem, but minimizes the number of calls made to an SMT solver. LoopInvGen is efficient at generating a large volume of guesses for the SMT solver. CLN2INV achieves a balance by producing quality guesses quickly, allowing it to solve problems efficiently.

Method        Number Solved   Avg Time (s)   Avg Z3 Calls   Time/Z3 Call (s)
Code2Inv            90            266.71          16.62           50.89
LoopInvGen         104             45.11        3,605.43           0.08
CLN2INV            124              1.07          31.77            0.17
Table 1: Results and summary statistics for performance evaluation.

6.2 More Difficult Loop Invariants

We construct 12 additional problems to demonstrate CLN2INV’s ability to infer complex loop invariants. We design these problems to have two key characteristics, which are absent in the Code2Inv dataset: (i) they require invariants involving conjunctions and disjunctions of multivariable constraints, and (ii) the invariant cannot easily be identified by inspecting the precondition, termination condition, or post-condition. CLN2INV is able to find correct invariants for all 12 problems in less than 20 seconds, while Code2Inv and LoopInvGen time out after an hour (see Appendix F).

6.3 Ablation Studies

Effect of CLN Training on Performance. CLN2INV relies on a combination of heuristics using static analysis and learning formulas from execution traces to correctly infer loop invariants. In this ablation we disable model training and limit CLN2INV to static models with no learnable parameters. The static CLN2INV solves 91 problems in the dataset. Figure 5 shows a comparison of full CLN2INV with one limited to static models. CLN2INV’s performance with training disabled shows that a large number of problems in the dataset are relatively simple and invariants can be inferred from basic heuristics. However, for more difficult problems, the ability of CLNs to learn SMT formulas is key to successfully finding correct invariants.

(a) Runtime performance.
(b) SMT solver calls.
Figure 5: Ablation study: Comparing Static vs Trained Models

Comparing t-norms. We compare the effect of different t-norms (Godel, Lukasiewicz, and Product) on convergence time in Table 2. All t-norms have very similar performance when used for conjunctions, but the product t-conorm converges faster than the other t-conorms on average. See Appendix G for more details.

Problem        Godel (iterations)   Lukasiewicz (iterations)   Product (iterations)
Conjunction           967                    966                       966
Disjunction         1,074                  1,221                       984
Table 2: Average iterations to convergence (average taken over 5 runs).

7 Conclusion

We develop a novel neural architecture that explicitly and precisely learns SMT formulas by construction. We achieve this by introducing a new sound and complete semantic mapping for SMT that enables learning formulas through backpropagation. We use CLNs to implement a loop invariant inference system, CLN2INV, that is the first to solve all theoretically solvable problems in the Code2Inv benchmark and takes only 1.1 seconds on average. We believe that the CLN architecture will also be beneficial for other domains that require learning SMT formulas.

References

  • Alur et al. (2019) Rajeev Alur, Dana Fisman, Saswat Padhi, Rishabh Singh, and Abhishek Udupa. Sygus-comp 2018: Results and analysis. CoRR, abs/1904.07146, 2019. URL http://arxiv.org/abs/1904.07146.
  • Augasta & Kathirvalavakumar (2012) M. Gethsiyal Augasta and T. Kathirvalavakumar. Rule extraction from neural networks — a comparative study. International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012), pp. 404–408, 2012.
  • Baaz et al. (1996) Matthias Baaz et al. Infinite-valued Gödel logics with 0-1-projections and relativizations. In Gödel'96: Logical foundations of mathematics, computer science and physics—Kurt Gödel's legacy, Brno, Czech Republic, August 1996, proceedings, pp. 23–33. Association for Symbolic Logic, 1996.
  • Biere et al. (2009) A. Biere, H. van Maaren, and T. Walsh. Handbook of Satisfiability: Volume 185 Frontiers in Artificial Intelligence and Applications. IOS Press, Amsterdam, The Netherlands, 2009. ISBN 1586039296, 9781586039295.
  • Blass & Gurevich (2001) Andreas Blass and Yuri Gurevich. Inadequacy of computable loop invariants. ACM Transactions on Computational Logic (TOCL), 2(1):1–11, 2001.
  • Chajed et al. (2019) Tej Chajed, Joseph Tassarotti, M. Frans Kaashoek, and Nickolai Zeldovich. Verifying concurrent, crash-safe systems with perennial. In Proceedings of the 27th Symposium on Operating Systems Principles. ACM, 2019.
  • De Moura & Bjørner (2008) Leonardo De Moura and Nikolaj Bjørner. Z3: An efficient smt solver. In International conference on Tools and Algorithms for the Construction and Analysis of Systems, pp. 337–340. Springer, 2008.
  • Evans & Grefenstette (2018) Richard Evans and Edward Grefenstette. Learning explanatory rules from noisy data. Journal of Artificial Intelligence Research, 61:1–64, 2018.
  • Fischer et al. (2019) Marc Fischer, Mislav Balunovic, Dana Drachsler-Cohen, Timon Gehr, Ce Zhang, and Martin Vechev. Dl2: Training and querying neural networks with logic. In International Conference on Machine Learning, pp. 1931–1941, 2019.
  • Furia et al. (2014) Carlo A Furia, Bertrand Meyer, and Sergey Velder. Loop invariants: Analysis, classification, and examples. ACM Computing Surveys (CSUR), 46(3):34, 2014.
  • Galeotti et al. (2015) Juan P Galeotti, Carlo A Furia, Eva May, Gordon Fraser, and Andreas Zeller. Inferring loop invariants by mutation, dynamic analysis, and static checking. IEEE Transactions on Software Engineering, 41(10):1019–1037, 2015.
  • Goodfellow et al. (2016) Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
  • Gu et al. (2016) Ronghui Gu, Zhong Shao, Hao Chen, Xiongnan Newman Wu, Jieung Kim, Vilhelm Sjöberg, and David Costanzo. Certikos: An extensible architecture for building certified concurrent os kernels. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI), pp. 653–669, 2016.
  • Hájek (2013) Petr Hájek. Metamathematics of fuzzy logic, volume 4. Springer Science & Business Media, 2013.
  • Hájek et al. (1996) Petr Hájek, Lluís Godo, and Francesc Esteva. A complete many-valued logic with product-conjunction. Archive for mathematical logic, 35(3):191–208, 1996.
  • Hoare (1969) Charles Antony Richard Hoare. An axiomatic basis for computer programming. Communications of the ACM, 12(10):576–580, 1969.
  • Hornik et al. (1989) Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural networks, 2(5):359–366, 1989.
  • Kearns et al. (1994) Michael J. Kearns and Umesh V. Vazirani. An introduction to computational learning theory. MIT Press, 1994.
  • Kimmig et al. (2012) Angelika Kimmig, Stephen Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. A short introduction to probabilistic soft logic. In Proceedings of the NIPS Workshop on Probabilistic Programming: Foundations and Applications, pp. 1–4, 2012.
  • Kingma & Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Lukasiewicz (1930) Jan Lukasiewicz. Untersuchungen über den Aussagenkalkül. Comptes rendus des séances de la Société des Sciences et des Lettres de Varsovie, cl. III, 23, 1930.
  • Nelson et al. (2017) Luke Nelson, Helgi Sigurbjarnarson, Kaiyuan Zhang, Dylan Johnson, James Bornholt, Emina Torlak, and Xi Wang. Hyperkernel: Push-button verification of an os kernel. In Proceedings of the 26th Symposium on Operating Systems Principles, pp. 252–269. ACM, 2017.
  • Nelson et al. (2019) Luke Nelson, James Bornholt, Ronghui Gu, Andrew Baumann, Emina Torlak, and Xi Wang. Scaling symbolic evaluation for automated verification of systems code with serval. In Symposium on Operating Systems Principles, 2019.
  • Nguyen et al. (2017) ThanhVu Nguyen, Timos Antonopoulos, Andrew Ruef, and Michael Hicks. Counterexample-guided approach to finding numerical invariants. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, pp. 605–615. ACM, 2017.
  • Padhi & Millstein (2017) Saswat Padhi and Todd D. Millstein. Data-driven loop invariant inference with automatic feature synthesis. CoRR, abs/1707.02029, 2017. URL http://arxiv.org/abs/1707.02029.
  • Padhi et al. (2016) Saswat Padhi, Rahul Sharma, and Todd Millstein. Data-driven precondition inference with learned features. ACM SIGPLAN Notices, 51(6):42–56, 2016.
  • Paszke et al. (2017) Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
  • Payani & Fekri (2019) Ali Payani and Faramarz Fekri. Inductive logic programming via differentiable deep neural logic networks. arXiv preprint arXiv:1906.03523, 2019.
  • Sharma & Aiken (2016) Rahul Sharma and Alex Aiken. From invariant checking to invariant inference using randomized search. Formal Methods in System Design, 48(3):235–256, 2016.
  • Si et al. (2018) Xujie Si, Hanjun Dai, Mukund Raghothaman, Mayur Naik, and Le Song. Learning loop invariants for program verification. In Advances in Neural Information Processing Systems, pp. 7751–7762, 2018.
  • Wilcox et al. (2015) James R Wilcox, Doug Woos, Pavel Panchekha, Zachary Tatlock, Xi Wang, Michael D Ernst, and Thomas Anderson. Verdi: a framework for implementing and formally verifying distributed systems. ACM SIGPLAN Notices, 50(6):357–368, 2015.

Appendix A Proof of the limit for the "=" mapping

Proof.

Let and . Then what we want to prove becomes

Because all are continuous in their domain, we have

Using basic algebra, we get

Combining these results, we have

For any t-norm, the boundary identities from §2.3 hold. Putting it all together, we have

which concludes the proof. ∎

Appendix B Proof of Theorem 1

Theorem 1. For any quantifier-free linear SMT formula F, there exists a CLN model M(F), such that

(1) M(F)(x) is continuous in its input x,
(2) F(x) is true if and only if M(F)(x) approaches 1 as ε goes to zero and B goes to infinity,
(3) M(F)(x) is differentiable almost everywhere,

as long as the t-norm used in building M(F) satisfies Property 1.

Proof.

For convenience of the proof, we first remove all occurrences of "≤", "<", "=", and "≠" in F by rewriting them in terms of "≥", ">", and "¬". Now the only operators that F may contain are "≥", ">", "¬", "∧", and "∨". We prove Theorem 1 by induction on the constructor of the formula F. In the following proof, we construct the model M(F) given F and show that it satisfies Eqs. (1) and (2). We leave the proof that M(F) also satisfies Eq. (3) to the reader.

Atomic Case. When F is an atomic clause, it will be of the form x > y or x ≥ y. For the first case, we construct a linear layer computing x − y followed by a sigmoid function scaled with factor B and right-shifted with distance ε. For the second case, we construct the same linear layer followed by a sigmoid function scaled with factor B and left-shifted with distance ε. Simply evaluating the limits for each, we arrive at Eq. (2), and from the definition of the sigmoid function we know that M(F)(x) is continuous, which gives Eq. (1).

Negation Case. If F = ¬F', then from the induction hypothesis, F' can be represented by a model M(F') satisfying Eqs. (1)(2)(3). Let p be the output node of M(F'). We add a final output node computing 1 − p, so M(F)(x) = 1 − M(F')(x). Using the induction hypothesis that M(F') is continuous, we conclude Eq. (1).

Now we prove the "⇒" side of Eq. (2). If F(x) is true, then F'(x) is false. From the induction hypothesis, we know that M(F')(x) approaches 0 in the limit. So M(F)(x) = 1 − M(F')(x) approaches 1.

Next we prove the "⇐" side. If M(F)(x) approaches 1 in the limit, then M(F')(x) approaches 0. From the induction hypothesis we know that F'(x) is false. So F(x) is true.

Conjunction Case. If F = F1 ∧ F2, from the induction hypothesis, F1 and F2 can be represented by models M(F1) and M(F2), such that both satisfy Eqs. (1)(2)(3). Let p1 and p2 be the output nodes of M(F1) and M(F2). We add a final output node computing p1 ⊗ p2, so M(F)(x) = M(F1)(x) ⊗ M(F2)(x). Since ⊗ is continuous and so are M(F1) and M(F2), we know their composition is also continuous. (Readers may wonder why M(F1) is continuous. Actually, the continuity of M(F1) should be proved inductively like this proof itself, and we omit it for brevity.) From the definition of ⊗, we have Eq. (1).

Now we prove the "⇒" side of Eq. (2). For any assignment x, if F(x) is true, which means both F1(x) and F2(x) are true, from the induction hypothesis we know that M(F1)(x) and M(F2)(x) both approach 1 in the limit. Then M(F)(x) = M(F1)(x) ⊗ M(F2)(x) approaches 1 ⊗ 1 = 1.

Then we prove the "⇐" side. From the induction hypothesis we know that M(F1)(x) ≤ 1 and M(F2)(x) ≤ 1. From the non-decreasing property of t-norms (see §2.3), we have M(F1)(x) ⊗ M(F2)(x) ≤ M(F1)(x) ⊗ 1.

Then from the consistency property and the commutative property, we have M(F1)(x) ⊗ 1 = M(F1)(x).

Putting them all together, we get M(F)(x) ≤ M(F1)(x) ≤ 1.

Because we know M(F)(x) approaches 1, according to the squeeze theorem in calculus, we get that M(F1)(x) approaches 1.

From the induction hypothesis, we know that F1(x) is true. We can prove F2(x) is true in the same manner. Finally, we have that F(x) is true.

Disjunction Case. For the case F = F1 ∨ F2, we construct M(F1) and M(F2) from F1 and F2 as we did in the conjunction case. This time we let the final output node compute p1 ⊕ p2, so M(F)(x) = M(F1)(x) ⊕ M(F2)(x). From the continuity of ⊗ and the definition of ⊕ (x ⊕ y = 1 − ((1 − x) ⊗ (1 − y))), ⊕ is also continuous. We conclude that M(F) is also continuous and satisfies Eq. (1) by the same argument as the conjunction case.

Now we prove the "⇒" side of Eq. (2). For any assignment x, if F(x) is true, then F1(x) or F2(x) is true. Without loss of generality, we assume F1(x) is true. From the induction hypothesis, we know M(F1)(x) approaches 1 in the limit.

For any t-conorm ⊕ and any truth values p and q, if q ≥ 0, then p ⊕ q ≥ p ⊕ 0 = p.

Using this property and the induction hypothesis, we have M(F)(x) ≥ M(F1)(x). From the induction hypothesis we also have M(F1)(x) ≤ 1 and M(F2)(x) ≤ 1. Using the definition of ⊕ and the properties of ⊗, we get M(F)(x) ≤ 1. Putting them all together, we get M(F1)(x) ≤ M(F)(x) ≤ 1.

Because we know M(F1)(x) approaches 1, according to the squeeze theorem in calculus, we get that M(F)(x) approaches 1.

Then we prove the "⇐" side. Here we need to use the existence of the limits of M(F1)(x) and M(F2)(x) as ε goes to zero and B goes to infinity. This property can be proved by induction like this proof itself, and is thus omitted for brevity.

Let a be the limit of M(F1)(x) and b be the limit of M(F2)(x). Then the limit of M(F)(x) is a ⊕ b. Since we have that M(F)(x) approaches 1, we get a ⊕ b = 1, which by the definition of ⊕ means (1 − a) ⊗ (1 − b) = 0. Using Property 1 of ⊗ (defined in §4), we have 1 − a = 0 or 1 − b = 0. Without loss of generality, we assume a = 1. From the induction hypothesis, we know that F1(x) is true. Finally, F(x) is true. ∎

Careful readers may have noticed that if we use the continuous mapping function S from §3, then the proof above can be viewed from another perspective as two interwoven parts. The first part is that we proved the following corollary.

Corollary 1.

For any quantifier-free linear SMT formula F, F(x) is true if and only if S(F)(x) approaches 1 as ε goes to zero and B goes to infinity.

Corollary 1 indicates the soundness of S. The second part is that we construct a CLN model M(F) given F. In other words, we translate S(F) into vertices in a computational graph composed of differentiable operations on continuous truth values.

Appendix C Proof of Theorem 2

Theorem 2. For any CLN model M(F) constructed from a formula F by the procedure shown in the proof of Theorem 1, if F is the conjunction of multiple linear equalities, then any local minimum of the training loss is the global minimum, as long as the t-norm used in building M(F) satisfies Property 2.

Proof.

Since F is the conjunction of linear equalities, it has the form of a conjunction of constraints, each equating a learnable linear combination of the terms to zero. Here the w's are the learnable weights and the x's are the terms (variables). We omit the bias in the linear equalities, as the bias can always be transformed into a weight by adding a constant of 1 as a term. For convenience, we define a shorthand for each such linear combination.

Given an assignment of the terms, if we construct our CLN model following the procedure shown in the proof of Theorem 1, the output of the model will be the t-norm of the continuous truth values of the individual equalities.

When we train our CLN model, we have a collection of data points which satisfy formula F. If B and ε are fixed (unlearnable), then the loss function is the sum, over the data points, of a strictly decreasing function applied to the model output, which we refer to as Eq. (4).

Suppose we are at a local minimum of the loss. We need to prove that it is also the global minimum. To prove this, we use the definition of a local minimum, stated as Eq. (5).

For convenience, we denote . Then we rewrite Eq.(4) as

If we can prove that, at the local minimum, every linear equality is exactly satisfied on the data points, then because (i) the continuous mapping of each equality reaches its global maximum when the equality is satisfied, (ii) the t-norm is monotonically increasing, and (iii) the per-sample loss is monotonically decreasing, we can conclude that the local minimum is the global minimum.

Now we prove . Here we just show the case . The proof for can be directly derived using the associativity of .

Let . Since for all , using Property 2 of our t-norm , we know that . Now the loss function becomes

From Eq.(5), we have

(6)

Because (i) is an even function decreasing on (which can be easily proved), (ii) is monotonically increasing, (iii) is monotonically decreasing, for , we have

(7)

Combining Eq.(6) and Eq.(7), we have

Now we look back on Eq.(7). Since (i) is strictly decreasing, (ii) the t-norm we used here has Property 2 (see §4 for definition), (iii) , the only case when () holds is that for all , we have . Since is strictly decreasing for , we have . Finally because , we have . ∎

Appendix D Theorem 3 and the proof

Theorem 3. Given a program : assume(); while () {} assert();
If we can find a loop invariant for program : assume(); while () {} assert();
and , then is a correct loop invariant for program .

Proof.

Since is a loop invariant of , we have

We want to prove is a valid loop invariant of C, which means

We prove the three propositions separately. To prove , we transform it into a stronger proposition , which directly comes from (a).

For , after simplification it becomes , which is a direct corollary of (b).

For , after simplification it will become two separate propositions, and . The former is exactly (c), and the latter is a known condition in the theorem. ∎

Appendix E Properties of Gaussian Function

We use a Gaussian-like function to represent equalities in our experiments. It has the following two properties. First, it preserves the original semantics of "=" in the limit, similar to the mapping we defined in §3. Second, viewed as a function of its input terms, it reaches its only local maximum exactly where the equality is satisfied.
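A minimal sketch (ours; the paper's exact parameterization may differ) of such a Gaussian-like equality mapping:

import torch

def S_eq_gaussian(x, y, sigma=0.5):
    # Gaussian-like continuous truth value for x == y: equals 1 exactly when
    # x == y, decays smoothly as |x - y| grows, and sharpens as sigma shrinks,
    # recovering the discrete semantics in the limit sigma -> 0.
    return torch.exp(-((x - y) ** 2) / (2 * sigma ** 2))

x = torch.tensor([0.0, 0.1, 1.0])
print(S_eq_gaussian(x, torch.tensor(0.0)))   # [1.0, ~0.98, ~0.14]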

Appendix F More Difficult Loop Invariants

//pre: t=-20 /\ u=-20
while (u != 0) {
  t++;
  if (u > 0)
    t = -u + 1;
  else
    t = -u - 1;
}
//post: t=
(a) Example Pseudocode
(b) Plotted trace of program
Figure 6: Finding the Loop Invariant of Problem 1.

Description of Problems. In this section, we discuss in detail two of the more difficult loop invariant problems we have created. Consider the first example loop in Fig. 6. For a loop invariant to be usable in verifying this loop, it must be valid for the precondition, the recursion step representing loop execution when the loop condition is satisfied, and the post-condition when the loop condition is no longer satisfied. The plot of the trace in Fig. 6(b) shows that the points lie on one of two lines expressible as linear equality constraints. These constraints, along with the loop-condition inequality, can be learned from the execution trace in under 20 seconds, and Z3 verifies that the inferred invariant is a sufficient loop invariant.

Figure 7 shows the pseudocode for the second program. Our CLN can learn the correct and precise loop invariant for this program in under 20 seconds, while both Code2Inv and LoopInvGen time out after one hour without finding a solution.

//pre: t=-10 /\ u=10 /\ v=-10 /\ w=10
while (u + w > 0) {
    if (unknown()) {
        t++; u--;
    } else {
        v++;