Learning Nonlinear Loop Invariants with Gated Continuous Logic Networks

03/17/2020 ∙ by Jianan Yao, et al. ∙ Columbia University

In many cases, verifying real-world programs requires inferring loop invariants with nonlinear constraints. This is especially true in programs that perform many numerical operations, such as control systems for avionics or industrial plants. Recently, data-driven methods for loop invariant inference have gained popularity, especially on linear loop invariants. However, applying data-driven inference to nonlinear invariants is challenging due to the large number and magnitude of high-order terms, the potential for overfitting on samples, and the large space of possible nonlinear inequality bounds. In this paper, we introduce a new neural architecture for general SMT learning, the Gated Continuous Logic Network (G-CLN), and apply it to nonlinear loop invariant learning. G-CLNs extend the Continuous Logic Network architecture with gating units and dropout, which allow the model to robustly learn general invariants over large numbers of terms. To address overfitting that arises from finite program sampling, we introduce fractional sampling, a sound relaxation of loop semantics to continuous functions that facilitates unbounded sampling on the real domain. We also design a new CLN activation function, the Piecewise Biased Quadratic Unit (PBQU), for naturally learning tight inequality bounds. We incorporate these methods into a nonlinear loop invariant inference system that can learn general nonlinear loop invariants. We evaluate our system on a benchmark of nonlinear loop invariants and show it solves 26 out of 27 problems, 3 more than prior work, with an average runtime of 53.3 seconds. We further demonstrate the generic learning ability of G-CLNs by solving all 124 problems in the linear Code2Inv benchmark. We also perform a quantitative stability evaluation and show G-CLNs have a convergence rate of 97.5% on quadratic problems, a 39.2% improvement over CLN models.
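The gating mechanism described above can be illustrated with a small sketch. The following is a minimal, hypothetical formulation that assumes the product t-norm T(a, b) = ab, its dual probabilistic-sum t-conorm, and gates that interpolate each input toward the connective's identity element (1 for the t-norm, 0 for the t-conorm); the function names and the exact gating formula are illustrative assumptions, not the paper's implementation.

    # Minimal sketch of gated fuzzy-logic connectives in the spirit of G-CLNs.
    # Assumptions (not taken from the paper's code): product t-norm T(a,b) = a*b,
    # dual t-conorm S(a,b) = a + b - a*b, and gates that pull a gated-off input
    # to the connective's identity element, removing it from the conjunction
    # or disjunction.

    def gated_tnorm(truth_values, gates):
        """Gated conjunction; a gate of 0 maps its input to 1 (the t-norm identity)."""
        out = 1.0
        for t, g in zip(truth_values, gates):
            out *= g * t + (1.0 - g)
        return out

    def gated_tconorm(truth_values, gates):
        """Gated disjunction; a gate of 0 maps its input to 0 (the t-conorm identity)."""
        out = 0.0
        for t, g in zip(truth_values, gates):
            gated = g * t                    # identity element of the t-conorm is 0
            out = out + gated - out * gated  # probabilistic sum S(a,b) = a + b - ab
        return out

    # With binary gates the connectives reduce to ordinary conjunction/disjunction
    # over the activated inputs only, which is what Theorem 3.1 below formalizes.
    print(gated_tnorm([1.0, 0.3], [1, 0]))    # -> 1.0 (second clause gated off)
    print(gated_tconorm([0.0, 0.9], [1, 1]))  # -> 0.9

During training the gates would be continuous parameters learned jointly with the term coefficients; Theorem 3.1 below applies once all gating parameters take 0/1 values.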


1. Proof for Theorem 3.1

Property 1. T(t_1, t_2) = 0 if and only if t_1 = 0 or t_2 = 0 (the t-norm has no zero divisors).

Theorem 3.1. For a gated CLN model M with input nodes x_1, ..., x_n and a single output node, if all gating parameters are either 0 or 1, then the SMT formula F recovered by the formula extraction algorithm (Algorithm 1) is equivalent to the gated CLN model M. That is, for every assignment of the input variables,

(1)
(2)

as long as the t-norm used in M satisfies Property 1.

Proof.

We prove this by structural induction on the gated CLN model.

T-norm Case. Suppose the final operation in M is a gated t-norm applied to submodels M_1, ..., M_k. Since all gating parameters of M are either 0 or 1, the gating parameters of each submodel M_i are also all either 0 or 1. By the induction hypothesis, for each M_i we can use Algorithm 1 to extract an equivalent SMT formula F_i satisfying Eq. (1)(2). We now prove that the full model M and the extracted formula F satisfy Eq. (1); the proof for Eq. (2) is similar and omitted for brevity.

The proof goal now becomes Eq. (1) stated for the full model M and the extracted formula F, and the induction hypothesis is

(3)

From lines 2-5 of Algorithm 1, we know that the extracted formula F is the conjunction of those formulas F_i whose gates are activated (g_i = 1). Because in our setting all gating parameters are either 0 or 1, a logically equivalent form of F can be derived:

(4)

Recall that we are considering the t-norm case, in which the output of M is a gated t-norm of the submodel outputs M_1(x), ..., M_k(x).

Using the properties of a t-norm listed in §2.2, we can prove that the t-norm evaluates to 1 if and only if all of its arguments are 1.

Because each gating parameter is either 0 or 1, we further have

(5)

Combining Eq. (3)(4)(5), we finally obtain Eq. (1) for M and F.
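As a concrete check of this case, assume (as in the sketch after the abstract, and only as one plausible parameterization) identity-element gating with the product t-norm, two submodels, and gates g_1 = 1, g_2 = 0:

    T_G(M_1(x), M_2(x); 1, 0) = (1 \cdot M_1(x) + 0)(0 \cdot M_2(x) + 1) = M_1(x),

so the model outputs 1 exactly when M_1(x) = 1, i.e., by the induction hypothesis, exactly when F_1(x) is true. This matches the extracted formula F = F_1, the conjunction over the activated submodels only.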

T-conorm Case. Suppose the final operation in M is a gated t-conorm applied to submodels M_1, ..., M_k. As in the t-norm case, by the induction hypothesis we can extract an equivalent SMT formula F_i for each submodel M_i using Algorithm 1. Again we only prove that the full model M and the extracted formula F satisfy Eq. (1).

From lines 7-10 of Algorithm 1, we know that the extracted formula F is the disjunction of those formulas F_i whose gates are activated (g_i = 1). Because in our setting all gating parameters are either 0 or 1, a logically equivalent form of F can be derived:

(6)

In the t-conorm case, the output of M is a gated t-conorm of the submodel outputs M_1(x), ..., M_k(x).

Using Property 1, we can prove that the t-conorm evaluates to 1 if and only if at least one of its arguments is 1.

Because each gating parameter is either 0 or 1, we further have

(7)

Combining Eq. (3)(6)(7), we finally obtain Eq. (1) for M and F.
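Dually, under the same illustrative gating assumption but with the probabilistic-sum t-conorm S(a, b) = a + b - ab (identity element 0), gating off the second submodel gives

    S_G(M_1(x), M_2(x); 1, 0) = M_1(x) + 0 - M_1(x) \cdot 0 = M_1(x),

which equals 1 exactly when an activated submodel outputs 1, matching the extracted disjunction over the activated F_i.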

Negation Case. Suppose the final operation in M is a negation applied to a submodel M_1. From the induction hypothesis, using Algorithm 1 we can extract an SMT formula F_1 from the submodel M_1 satisfying Eq. (1)(2). From lines 11-12 of Algorithm 1, the extracted formula for M is F = ¬F_1. We now show that such an F satisfies Eq. (1)(2).

Atomic Case. In this case, the model consists of only a linear layer and an activation function, with no logical connectives. This degenerates to the atomic case for the ungated CLN in (Ryan et al., 2020), where the proof can simply be reused. ∎
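Algorithm 1 itself is not reproduced in this excerpt; the following minimal Python sketch is consistent with how the proof describes it (conjunction or disjunction over gated-on submodels, negation, and atomic leaves). All class and field names are illustrative assumptions, not the paper's code.

    # Sketch of gate-based formula extraction, following the proof's description
    # of Algorithm 1. Gates are assumed to have converged to 0 or 1.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Node:
        kind: str                              # "tnorm", "tconorm", "neg", or "atom"
        children: Optional[List["Node"]] = None
        gates: Optional[List[int]] = None      # one gate per child
        atom: Optional[str] = None             # SMT atom for leaves, e.g. "x*x <= n"

    def extract(node: Node) -> str:
        if node.kind == "tnorm":    # lines 2-5: conjunction of activated children
            parts = [extract(c) for c, g in zip(node.children, node.gates) if g == 1]
            return "(" + " and ".join(parts) + ")" if parts else "true"
        if node.kind == "tconorm":  # lines 7-10: disjunction of activated children
            parts = [extract(c) for c, g in zip(node.children, node.gates) if g == 1]
            return "(" + " or ".join(parts) + ")" if parts else "false"
        if node.kind == "neg":      # lines 11-12: negate the single child's formula
            return "(not " + extract(node.children[0]) + ")"
        return node.atom            # atomic case: linear layer plus activation

    # Example: a gated t-norm that keeps only the first of two learned atoms.
    model = Node("tnorm",
                 children=[Node("atom", atom="x*x <= n"), Node("atom", atom="y >= 0")],
                 gates=[1, 0])
    print(extract(model))  # -> (x*x <= n)

The empty-conjunction and empty-disjunction defaults ("true" and "false") are also assumptions; they are the natural choices when every gate of a connective is switched off.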

2. Proof for Theorem 3.2

Recall our continuous mapping for inequalities.

(8)

Theorem 3.2. Given a set of -dimensional samples with the maximum L2-norm , if and , and the weights are constrained as , then when the model converges, the learned inequality has distance at most from a ’desired’ inequality.
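The exact mapping in Eq. (8) is not recoverable from this excerpt, so the following sketch uses a hypothetical piecewise quadratic activation purely to illustrate the behavior Theorem 3.2 formalizes: maximizing the continuous truth value keeps every sample close to the learned bound and pulls the bound tight against the data. The functional form and constants below are illustrative assumptions, not the paper's PBQU definition.

    import numpy as np

    # Hypothetical PBQU-style activation for an inequality w.x <= b:
    # gentle quadratic decay on the satisfied side, steep penalty when violated,
    # clipped so the truth value stays in [0, 1].
    def pbqu(slack, c_sat=0.05, c_viol=5.0):
        # slack = b - w.x ; slack >= 0 means the sample satisfies the inequality
        raw = np.where(slack >= 0, 1.0 - c_sat * slack**2, 1.0 - c_viol * slack**2)
        return np.maximum(0.0, raw)

    xs = np.array([1.0, 2.0, 3.0, 4.0])   # observed values of the term w.x
    for b in (3.5, 4.0, 6.0, 10.0):       # candidate bounds for w.x <= b
        print(f"b = {b:5.1f}  total truth value = {pbqu(b - xs).sum():.3f}")
    # The tight bound b = 4.0 attains the highest total truth value: looser bounds
    # are penalized quadratically, while b = 3.5 violates one sample and is
    # penalized much more sharply.

This mirrors the convergence behavior the proof establishes: no sample may violate the converged bound by much, and at least one sample must sit on or beyond the boundary.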

Proof.

The model maximizes the overall continuous truth value of the learned inequality over all samples, which serves as the reward function. We first want to prove that if some point breaks the inequality by more than the stated distance, then the derivative of the reward is bounded away from zero, indicating the model will not converge there. Without loss of generality, we assume the first point breaks the inequality, i.e.,

(9)

We will consider the following two cases.

(Case 1) All the points break the inequality, i.e.,

From Eq. (8), it is easy to see that

(10)

Then we have

(Case 2) At least one point satisfies the inequality.

(11)

From the Cauchy–Schwarz inequality, we have

So

(12)

Combining Eq. (9)(11)(12), we can obtain a lower bound and an upper bound:

(13)

Using basic calculus, we can show that the expression is strictly increasing in two of the quantities involved and strictly decreasing in the third. Combining Eq. (10)(12)(13), we have

(14)

The deduction of Eq. (14) requires an auxiliary condition, which follows from the two conditions assumed in the theorem statement.

Eq. (14) provides a lower bound on the derivative for any point. For the point that breaks the inequality in Eq. (9), since the bound is strictly increasing in the violation distance, we can obtain a stronger lower bound:

(15)

Putting it all together, we have

Some intermediate steps are omitted. Using the conditions assumed in the theorem, we finally conclude that the derivative is bounded away from zero.

We have now proved that, at convergence, no point can break the learned inequality by more than the stated distance. It remains to prove that at least one point lies on or beyond the boundary. We prove this by contradiction: suppose every point satisfies the inequality strictly, lying strictly inside the boundary.

Then the derivative of the reward is again nonzero.

So the model does not converge, which concludes the proof. ∎

References

  • G. Ryan, J. Wong, J. Yao, R. Gu, and S. Jana (2020) CLN2INV: Learning loop invariants with continuous logic networks. In International Conference on Learning Representations (ICLR).