Many applications require dealing with transcendental functions (e.g., exponential, logarithm, sine, cosine). Nevertheless, the problem of Satisfiability Modulo the theory of transcendental functions comes with many difficulties. First, the problem is in general undecidable. Second, we may be forced to deal with irrational numbers: in fact, differently from polynomials, transcendental functions most often have irrational values for rational arguments. (See, for example, Hermite's proof that e^x is irrational for rational non-zero x.)
In this paper, we describe a novel approach to Satisfiability Modulo the quantifier-free theory of (nonlinear arithmetic with) transcendental functions over the reals, SMT(NTA). The approach is based on an abstraction-refinement loop, using SMT(UFLRA) as the abstract space, UFLRA being the combined theory of linear arithmetic on the rationals with uninterpreted functions. The uninterpreted functions are used to model nonlinear and transcendental functions. Then, we incrementally axiomatize the transcendental functions with a lemma-on-demand approach. Specifically, we eliminate spurious interpretations in SMT(UFLRA) by tightening the piecewise-linear envelope around the (uninterpreted counterparts of the) transcendental functions.
A key challenge is to compute provably correct approximations, also in the presence of irrational numbers. We use Taylor series to exactly compute suitably accurate rational coefficients. We remark that nonlinear polynomials are only used to numerically compute the coefficients (i.e., no SMT solving in the theory of nonlinear arithmetic, SMT(NRA), is needed), whereas the refinement is based on the addition, in the abstract space, of piecewise-linear axiom instantiations, which upper- and lower-bound the candidate solutions, ruling out spurious interpretations. To compute such piecewise-linear bounding functions, the concavity of the curve is taken into account. In order to deal with trigonometric functions, we take into account their periodicity, so that the axiomatization is only done in the interval between −π and π. Interestingly, not only is this helpful for efficiency, but it is also required to ensure correctness.
Another distinguishing feature of our approach is a logical method to conclude the existence of a solution without explicitly constructing it. We use a sufficient criterion that consists in checking whether the formula is satisfiable under all possible interpretations of the uninterpreted functions (representing the transcendental functions) that are consistent with some rational interval bounds within which the correct values for the transcendental functions are guaranteed to exist. We encode the problem as an SMT(UFLRA) satisfiability check, such that an unsatisfiable result implies the satisfiability of the original SMT(NTA) formula.
We implemented the approach on top of the MathSAT SMT solver, using the PySMT library. We experimented with benchmarks from SMT-based verification queries over nonlinear transition systems, including Bounded Model Checking of hybrid automata, as well as with several mathematical properties from the MetiTarski suite and from other competitor solver distributions. We contrasted our approach with state-of-the-art approaches based on interval propagation (iSAT3 and dReal), and with the deductive approach of MetiTarski. The results show that our solver compares favourably with the other solvers, being able to decide the highest number of benchmarks.
This paper is organized as follows. In §2 we describe some background. In §3 we overview the approach, defining the foundation for safe linear approximations. In §4 we describe the specific axiomatization for transcendental functions. In §5 we discuss the related literature, and in §6 we present the experimental evaluation. In §7 we draw some conclusions and outline directions for future work.
We assume the standard first-order quantifier-free logical setting and standard notions of theory, satisfiability, and logical consequence. As usual in SMT, we denote with LRA the theory of linear real arithmetic, with NRA that of non-linear real arithmetic, with UF the theory of equality (with uninterpreted functions), and with UFLRA the combined theory of UF and LRA. Unless otherwise specified, we use the terms variable and free constant interchangeably. We denote formulas with φ, terms with t, variables with x and y, and functions with f, each possibly with subscripts. If x and y are two variables, we denote with φ[y/x] the formula obtained by replacing all the occurrences of x in φ with y. We extend this notation to ordered sequences of variables in the natural way. If μ is a model and x is a variable, we write μ[x] to denote the value of x in μ, and we extend this notation to terms and formulas in the usual way. If Γ is a set of formulas, we write ⋀Γ to denote the formula obtained by taking the conjunction of all its elements. We write φ ∧ Γ for φ ∧ ⋀Γ.
A transcendental function is an analytic function that does not satisfy a polynomial equation (in contrast to an algebraic function [26, 15]). Within this paper we consider univariate exponential, logarithmic, and trigonometric functions. We denote with NTA the theory of (non-linear) real arithmetic extended with these transcendental functions.
A tangent line to a univariate function f at a point of interest is a straight line that "just touches" the function at that point, and represents the instantaneous rate of change of the function there. The tangent line to the function f at point a is the straight line defined as follows:

TanLine_{f,a}(x) := f(a) + f'(a)·(x − a)

where f'(a) is the first-order derivative of f wrt. x, evaluated at a.
A secant line to a univariate function f is a straight line that connects two points on the function plot. The secant line to a function f between points a and b is defined as follows:

SecLine_{f,a,b}(x) := ((f(a) − f(b)) / (a − b)) · (x − a) + f(a)
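For concreteness, both constructions can be carried out with exact rational arithmetic. The following Python sketch (ours, not from the paper; f(x) = x² is just a stand-in with an easy derivative) returns each line as a (slope, intercept) pair:

```python
from fractions import Fraction

def f(x):
    # Stand-in function: f(x) = x^2 (any differentiable f works the same way)
    return x * x

def df(x):
    # First derivative of the stand-in: f'(x) = 2x
    return 2 * x

def tangent_line(a):
    """Tangent to f at a: y = f(a) + f'(a)*(x - a), as (slope, intercept)."""
    slope = df(a)
    return slope, f(a) - slope * a

def secant_line(a, b):
    """Secant to f between a and b, as (slope, intercept)."""
    slope = (f(a) - f(b)) / (a - b)
    return slope, f(a) - slope * a

# Tangent to x^2 at x=1 is y = 2x - 1; secant between x=1 and x=2 is y = 3x - 2.
```

Using Fraction inputs keeps every coefficient an exact rational, which is what the linearization lemmas later require.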
For a function f that is twice differentiable at point a, the concavity of f at a is the sign of its second derivative f''(a). We denote open and closed intervals between two real numbers l and u as (l, u) and [l, u] respectively. Given a univariate function f over the reals, the graph of f is the set of pairs (x, f(x)). We might sometimes refer to an element of the graph as a point.
Taylor Series and Taylor’s Theorem.
Given a function f that has n continuous derivatives at a, the Taylor series of degree n centered around a is the polynomial:

P_n(x) := Σ_{k=0}^{n} (f^(k)(a) / k!) · (x − a)^k

where f^(k)(a) is the evaluation of the k-th derivative of f at point a. The Taylor series centered around a = 0 is also called the Maclaurin series.
According to Taylor's theorem, any continuous function f that is (n+1)-times differentiable can be written as the sum of the Taylor series and the remainder term:

f(x) = P_n(x) + R_{n+1}(x)

where R_{n+1}(x) is the Lagrange form of the remainder, and for some point b between x and a it is given by:

R_{n+1}(x) = (f^(n+1)(b) / (n+1)!) · (x − a)^(n+1)

The value of the point b is not known, but the upper bound on the size of the remainder at a point x can be estimated by:

|R_{n+1}(x)| ≤ max_{c ∈ [min(a,x), max(a,x)]} (|f^(n+1)(c)| / (n+1)!) · |x − a|^(n+1)
This allows us to obtain two polynomials that are above and below the function at a given point x, by considering P_n(x) + |R_{n+1}(x)| and P_n(x) − |R_{n+1}(x)| respectively.
3 Overview of the approach
Our procedure, which extends to SMT(NTA) and pushes further the approach previously presented for SMT(NRA), works by overapproximating the input formula with a formula over the combined theory of linear arithmetic and uninterpreted functions. The main algorithm is shown in Fig. 1. The solving procedure follows a classic abstraction-refinement loop, in which at each iteration the current safe approximation φ̂ of the input SMT(NTA) formula φ is refined by adding new constraints that rule out one (or possibly more) spurious solutions, until one of the following conditions occurs: (i) the resource budget (e.g. time, memory, number of iterations) is exhausted; (ii) φ̂ becomes unsatisfiable in SMT(UFLRA); or (iii) the SMT(UFLRA) satisfiability result for φ̂ can be lifted to a satisfiability result for the original formula φ. An initial current precision is set (calling the function initial-precision), and this value is possibly increased at each iteration (calling maybe-increase-precision) according to the result of check-refine and some heuristic.
In Fig. 1 we distinguish between two different refinement procedures: 1) check-refine, which is described below; 2) refine-extra, which is described in §4, where we provide further details on the treatment of each specific transcendental function that we currently support.
3.0.1 Initial Abstraction.
The function initial-abstraction takes as input an SMT(NTA) formula φ and returns an SMT(UFLRA) safe approximation φ̂ of it. First, we flatten each transcendental function application tf(t) in φ in which the argument t is not a variable, by replacing t with a fresh variable x and by conjoining x = t to φ. Then, we replace each transcendental function tf in φ with a corresponding uninterpreted function uf_tf, thus producing an SMT(UFLRA) formula φ̂. Finally, we add to φ̂ some simple initial axioms for the different transcendental functions, expressing general, simple mathematical properties about them. We shall describe such axioms in §4.
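The flattening step can be sketched over a toy term representation (nested tuples; the representation and names are ours, not MathSAT's or PySMT's data structures):

```python
import itertools

TRANSCENDENTAL = {"exp", "sin"}
_fresh = itertools.count()

def flatten(term, defs):
    """Ensure every transcendental application has a plain variable argument.

    Terms are ("var", name), ("const", c), or (op, arg1, ...); each introduced
    definition x = t is collected in defs, to be conjoined to the formula."""
    if term[0] in ("var", "const"):
        return term
    op, *args = term
    args = [flatten(a, defs) for a in args]
    if op in TRANSCENDENTAL and args[0][0] != "var":
        x = f"_flat{next(_fresh)}"
        defs.append((x, args[0]))   # record the definition x = t
        return (op, ("var", x))
    return (op, *args)

# sin(a + 1) becomes sin(_flat0), with the recorded definition _flat0 = a + 1.
defs = []
t = flatten(("sin", ("+", ("var", "a"), ("const", 1))), defs)
```

Applications whose argument is already a variable, such as exp(x), are left untouched.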
If φ contains also non-linear polynomials, we handle them as previously described: we replace each non-linear product t₁·t₂ with an uninterpreted function application mul(t₁, t₂), and add to the input formula some initial axioms expressing general, simple mathematical properties of multiplication.
3.0.2 Spuriousness check and abstraction refinement.
The core of our procedure is the check-refine function, shown in Fig. 2.
First, if the formula contains also some non-linear polynomials, check-refine performs the refinement of non-linear multiplications as previously described. In Fig. 2, this is represented by the call to the function check-refine-NRA, which may return some axioms to further constrain the mul terms. If no non-linear polynomials occur in φ̂, then the set of axioms is initialized as the empty set.
Then, the function iterates over all the transcendental function applications uf_tf(x) in φ̂, and checks whether the SMT(UFLRA) model μ is consistent with their semantics.
Intuitively, in principle, this amounts to checking that μ[uf_tf(x)] is equal to tf(μ[x]). In practice, however, the check cannot be exact, since transcendental functions at rational points typically have irrational values, which cannot be represented in SMT(UFLRA). Therefore, for each uf_tf(x) in φ̂, we instead compute two polynomials, P_l and P_u, with the property that tf(μ[x]) belongs to the open interval (P_l(μ[x]), P_u(μ[x])). The polynomials are computed using Taylor series, according to the given current precision, by the function poly-approx, which shall be described in §4.
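The consistency check can be sketched as follows (a simplified stand-alone version for sin; in the actual procedure the degree is driven by the current precision):

```python
from fractions import Fraction
from math import factorial

def sin_poly_approx(c, n):
    """Rational (lo, hi) with sin(c) strictly inside: Maclaurin polynomial of
    degree 2n+1 plus the Lagrange remainder bound |c|**(2n+3) / (2n+3)!
    (valid everywhere, since every derivative of sin is bounded by 1)."""
    p = sum(Fraction((-1)**k) * c**(2*k + 1) / factorial(2*k + 1)
            for k in range(n + 1))
    r = abs(c)**(2*n + 3) / Fraction(factorial(2*n + 3))
    return p - r, p + r

def is_spurious(model_value, c, n):
    """True iff the model's value for uf_sin(c) provably differs from sin(c)."""
    lo, hi = sin_poly_approx(c, n)
    return not (lo < model_value < hi)
```

For example, with c = 1 and n = 1 the interval is (99/120, 101/120): a model value of 1/2 is recognized as spurious, while 0.8415 is not (yet) ruled out and would require a higher degree.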
If the model value for uf_tf(x) is outside the above interval, then the function get-lemmas-point is used to generate some linear lemmas that will remove the spurious point from the graph of the current abstraction of tf.
If at least one point was refined in the loop, the current set of lemmas is returned. If instead none of the points was determined to be spurious, the function check-model is called. This function tries to determine whether the abstract model does indeed imply the existence of a model for the original formula (more details are given below). If the check fails, we repeat the check-refine call with an increased precision.
3.0.3 Refining a spurious point with secant and tangent lines.
Given a transcendental function application tf(x), the get-lemmas-point function generates a set of lemmas for refining the interpretation of uf_tf by constructing a piecewise-linear approximation of tf around the point μ[x], using one of the polynomials P_l and P_u computed in check-refine. The kind of lemmas generated, and which of the two polynomials is used, depend on (i) the position of the spurious value μ[uf_tf(x)] relative to the correct value tf(μ[x]), and (ii) the concavity of tf around the point μ[x]. If the concavity is positive (resp. negative) or equal to zero, and the point lies below (resp. above) the function, then the linear approximation is given by a tangent to the lower (resp. upper) bound polynomial P_l (resp. P_u) at μ[x]; otherwise, i.e. when the concavity is negative (resp. positive) and the point is below (resp. above) the function, the linear approximation is given by a pair of secants to the lower (resp. upper) bound polynomial P_l (resp. P_u) around μ[x] (see Fig. 3). The two situations are illustrated in Fig. 4.
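The case split above can be summarized in a few lines (a sketch; the return-value names are ours):

```python
def refinement_kind(concavity, point_below):
    """Select the lemma type for a spurious point (c, v) on uf_tf.

    concavity: sign of tf''(c), i.e. -1, 0, or 1; point_below: True if the
    spurious value v lies below tf(c), False if above. A single tangent to the
    bounding polynomial separates the point on the convex side of the curve;
    on the other side a pair of secants is needed."""
    if concavity >= 0 and point_below:
        return "tangent-to-lower-polynomial"
    if concavity <= 0 and not point_below:
        return "tangent-to-upper-polynomial"
    return "secant-pair"
```

For instance, a point below exp (concavity always positive) is always cut off by a tangent, while a point above exp requires two secants.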
In the case of tangent refinement, the function get-tangent-bounds returns an interval within which the tangent line is guaranteed not to cross the transcendental function. In practice, this interval can be (under)approximated quickly by exploiting known properties of the specific function under consideration. For example, for the exponential function get-tangent-bounds always returns (−∞, +∞); for other functions, the computation can be based e.g. on an analysis of the (known, precomputed) inflection points of the function around the point of interest and the slope of the tangent line.
In the case of secant refinement, a second point, different from μ[x], is required to draw a secant line. The function get-previous-secant-points returns the set of all the points at which a secant refinement was performed in the past for uf_tf. From this set, we take the two points p₁ and p₂ closest to μ[x], such that p₁ < μ[x] < p₂ and such that the resulting secants do not cross any inflection point. (For simplicity, we assume that this is always possible. If needed, this can be implemented e.g. by generating the two points at random while ensuring these conditions.) We use those points to generate two secant lines and their validity intervals. Before returning the set of the two corresponding lemmas, we also store the new secant refinement point by calling store-secant-point.
3.0.4 Detecting satisfiable formulas.
The function check-model tries to determine whether the UFLRA model μ for φ̂ implies the satisfiability of the original formula φ. If, for all uf_tf(x) in φ̂, tf has a rational value at the rational point μ[x] (although, as mentioned above, this is not the case in general, it is true for some special values, e.g. exp(0) = 1 and sin(0) = 0), and μ[uf_tf(x)] is equal to tf(μ[x]), then μ can be directly lifted to a model for φ.
In the general case, we exploit this simple observation: we can still conclude that φ is satisfiable if we are able to show that φ̂ is satisfiable under all possible interpretations of the uf_tf symbols that are guaranteed to include also the correct values tf(μ[x]).
Using the model μ, we compute safe lower and upper bounds l_{tf(x)} and u_{tf(x)} for each function tf at point μ[x] with the poly-approx function (see above). Let T be the set of all uf_tf(x) terms occurring in φ̂. Intuitively, if we can prove the validity, under all interpretations of the uninterpreted function symbols, of the following formula:

( ⋀_{uf_tf(x) ∈ T} l_{tf(x)} ≤ uf_tf(x) ≤ u_{tf(x)} ) → φ̂

then the original formula φ is satisfiable.
In order to be able to use a quantifier-free SMT(UFLRA) solver, we reduce the problem to the validity check of a pure UFLRA formula. We replace each occurrence of uf_tf(x) ∈ T in φ̂ with a corresponding fresh variable w_{tf(x)} from a set W, obtaining the formula φ̂*. We then check the validity of the formula:

∀W. ( ( ⋀_{w ∈ W} l_w ≤ w ≤ u_w ) → φ̂* )
If ( ⋀_{w ∈ W} l_w ≤ w ≤ u_w ) ∧ ¬φ̂* is unsatisfiable, we conclude that φ is satisfiable. Clearly, this can be checked with a quantifier-free SMT(UFLRA) solver, since ∀W.ψ is equivalent to ¬∃W.¬ψ, and the existential quantifier can then be removed by Skolemization.
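A toy illustration of the criterion (pure Python, not the actual SMT encoding): when φ̂* over a single fresh value w is a conjunction of non-strict linear constraints a·w + b ≥ 0, the universal check over [l, u] reduces to evaluating the two endpoints, since a linear function attains its minimum there; this mimics proving (l ≤ w ≤ u) ∧ ¬φ̂* unsatisfiable:

```python
from fractions import Fraction

def holds_for_all(constraints, lo, hi):
    """Check that a*w + b >= 0 holds, for every constraint, for EVERY w in [lo, hi].

    Each linear expression attains its minimum over [lo, hi] at an endpoint,
    so checking the two endpoints decides the universal claim."""
    return all(a * w + b >= 0 for (a, b) in constraints for w in (lo, hi))

# Original formula (illustrative): sin(1) >= 4/5  and  sin(1) <= 9/10.
# Abstracted over w = uf_sin(1):   w - 4/5 >= 0   and  -w + 9/10 >= 0.
psi_hat = [(Fraction(1), Fraction(-4, 5)), (Fraction(-1), Fraction(9, 10))]

# Safe bounds for sin(1) from poly-approx could be, e.g., [21/25, 17/20];
# psi_hat holds for every w in that interval, so the formula is satisfiable.
```

The real check handles arbitrary Boolean structure and many fresh values at once by delegating the unsatisfiability test to the SMT(UFLRA) solver; the interval-endpoint shortcut works only in this deliberately linear, single-variable toy setting.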
4 Abstraction Refinement for Transcendental Functions
In this section, we describe the implementation of poly-approx and refine-extra for the transcendental functions that we currently support, namely exp and sin. (We remark that our tool (see §6) can also handle log, cos, tan, arcsin, arccos, and arctan by means of rewriting. We leave as future work the possibility of handling such functions natively.)
poly-approx uses the Maclaurin series of the corresponding transcendental function and Taylor's theorem to find the lower and upper polynomials. Essentially, this is done by expanding the series (and the remainder approximation) up to a certain degree n, until the desired precision is achieved (i.e. until the difference between the upper and lower polynomials, evaluated at the point of interest, is small enough).
Notice that, since the series is centered around zero, we can precisely evaluate the derivative of any order at 0 for both exp and sin. (Because (i) exp(0) = 1, sin(0) = 0, and cos(0) = 1; (ii) the k-th derivative of exp is exp itself, for all k; and (iii) the k-th derivative of sin at 0 is ±1 if k is odd and 0 otherwise.) Hence, the computation of both the Maclaurin series and the remainder polynomial is always exact.
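Because these derivative values at 0 are the integers 0 and ±1, the Maclaurin coefficients can be produced as exact rationals; a small sketch:

```python
from fractions import Fraction
from math import factorial

def deriv_at_zero(func, k):
    """Exact value of the k-th derivative at 0, for func in {"exp", "sin"}."""
    if func == "exp":
        return 1                          # every derivative of exp is exp; exp(0) = 1
    if func == "sin":
        if k % 2 == 0:
            return 0                      # even-order derivatives of sin vanish at 0
        return 1 if k % 4 == 1 else -1    # odd-order ones alternate: 1, -1, 1, ...
    raise ValueError(func)

def maclaurin_coeffs(func, n):
    """Exact rational coefficients [c_0, ..., c_n] with c_k = f^(k)(0) / k!."""
    return [Fraction(deriv_at_zero(func, k), factorial(k)) for k in range(n + 1)]
```

For instance, maclaurin_coeffs("sin", 5) yields the coefficients of the familiar x − x³/6 + x⁵/120.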
4.0.1 Exponential Function
The polynomial given by the Maclaurin series behaves differently depending on the sign of the argument. For that reason, poly-approx distinguishes three cases for finding the polynomials P_l and P_u at the point c = μ[x]:
- Case c = 0: since exp(0) = 1, we have P_l = P_u = 1;
- Case c < 0: we have that P_n(c) ≤ exp(c) if n is odd, and P_n(c) ≥ exp(c) if n is even (where P_n is the Maclaurin polynomial of degree n); we therefore set P_l = P_{2k+1} and P_u = P_{2k} for a suitable k so that the required precision is met;
- Case c > 0: we have that P_n(c) < exp(c), and exp(c) ≤ P_n(c) · (1 − c^(n+1)/(n+1)!)^(−1) when c^(n+1)/(n+1)! < 1; therefore we set P_l = P_n and P_u = P_n · (1 − c^(n+1)/(n+1)!)^(−1) for a suitable n.
Since the concavity of exp is always positive, the tangent refinement will always give lower bounds for exp, and the secant refinement will always give upper bounds. Moreover, as exp has no inflection points, get-tangent-bounds always returns (−∞, +∞).
The exponential function is monotonically increasing, a property that the linear lemmas alone do not enforce. We check this property between pairs of uf_exp terms in φ̂: if μ[x₁] < μ[x₂], but μ[uf_exp(x₁)] ≥ μ[uf_exp(x₂)], then we add the following extra refinement lemma: x₁ < x₂ → uf_exp(x₁) < uf_exp(x₂).
We add the following initial axioms to φ̂, among which the zero tangent line axiom: uf_exp(x) ≥ x + 1 (with equality only at x = 0).
4.0.2 Sin Function
The correctness of our refinement procedure relies crucially on being able to compute the concavity of the transcendental function at a given point c. This is needed in order to know whether a computed tangent or secant line constitutes a valid upper or lower bound for the function around c (see Fig. 3). In the case of the sin function, computing the concavity at an arbitrary point is problematic, since this essentially amounts to computing the remainder of the division of c by π, which, π being a transcendental number, cannot be computed exactly.
In order to solve this problem, we exploit another property of sin, namely its periodicity (with period 2π). More precisely, we split the reasoning about sin depending on two kinds of periods: base period and extended period. A period is a base period for the sin function if it goes from −π to π; otherwise it is an extended period. In order to reason about periods, we first introduce a symbolic variable π, and add the constraint l_π ≤ π ≤ u_π to φ̂, where l_π and u_π are valid rational lower and upper bounds for the actual value of π (in our current implementation, we use the continued-fraction convergents 333/106 and 355/113). Then, we introduce for each uf_sin(x) term an "artificial" function application uf_sin(y_x) (where y_x is a fresh variable), whose domain is the base period. This is done by adding the following constraints: −π ≤ y_x ≤ π; (−π ≤ x ≤ π) → y_x = x; and (−π ≤ x ≤ π) → uf_sin(x) = uf_sin(y_x).
We call these fresh variables base variables. Notice that the second and the third constraints say that uf_sin(y_x) behaves the same as uf_sin(x) in the base period.
Let T_B be the set of uf_sin terms that have base variables as arguments, T be the set of all uf_sin terms, and T_E = T ∖ T_B. The tangent and secant refinement is performed for the terms in T_B, while we add a linear shift lemma (described below) as refinement for the terms in T_E. Using this transformation, we can easily compute the concavity of sin at c = μ[y_x] by just looking at the sign of μ[uf_sin(y_x)], provided that −l_π ≤ c ≤ l_π, where l_π is the current lower bound for π. (In the interval [−π, π], the concavity of sin is the opposite of the sign of sin.) In the case in which c < −l_π or c > l_π, we do not perform the tangent/secant refinement, but instead we refine the precision of π. For each term in T_B, poly-approx tries to find the lower and upper polynomial using Taylor's theorem, which ensures that:
P_n(c) − |c|^(n+1)/(n+1)! ≤ sin(c) ≤ P_n(c) + |c|^(n+1)/(n+1)!

where P_n is the Maclaurin polynomial of sin of degree n, and |c|^(n+1)/(n+1)! bounds the remainder, since all derivatives of sin are bounded by 1 in absolute value. Therefore, we can set P_l(c) = P_n(c) − |c|^(n+1)/(n+1)! and P_u(c) = P_n(c) + |c|^(n+1)/(n+1)!.
For each uf_sin(x) ∈ T_E with the corresponding base variable y_x, we check whether the value μ[x], after shifting to the base period, is equal to the value of μ[y_x]. We calculate the shift s of x as the rounding towards zero of μ[x] / (2·μ[π]), and we then compare μ[x] − 2·s·μ[π] with μ[y_x]. If the values are different, we add the following shift lemma relating x with y_x in the extended period s:

(2sπ − π ≤ x ≤ 2sπ + π) → (y_x = x − 2sπ ∧ uf_sin(x) = uf_sin(y_x))
In this way, we do not need the tangent and secant refinement for the extended periods, and we can reuse the refinements done in the base period. Note that even if the calculated shift value is wrong (due to the imprecision of μ[π] with respect to the real value of π), we may generate a useless lemma, but never a wrong one.
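A sketch of the shift computation (ours, with hypothetical names; the actual implementation details may differ):

```python
from fractions import Fraction

# Rational bracketing of pi by classic continued-fraction convergents.
PI_LO = Fraction(333, 106)   # < pi
PI_HI = Fraction(355, 113)   # > pi

def period_shift(c, pi_approx=PI_LO):
    """Candidate period index s, as the rounding towards zero of c / (2*pi).

    pi_approx only approximates pi, so s is a guess: a wrong s produces a
    shift lemma whose guard is never active (useless, but never unsound)."""
    return int(c / (2 * pi_approx))   # int() truncates towards zero for Fraction

def shifted(c, s, pi_approx=PI_LO):
    """Approximate image of c in the base period, to compare against mu[y_x]."""
    return c - 2 * s * pi_approx
```

For example, x = 7 lies in extended period s = 1, and 7 − 2π ≈ 0.717 falls inside the base period.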
We also check the monotonicity property of sin, which can be described for the base period as follows: (i) sin is monotonically increasing in the interval −π/2 to π/2; (ii) sin is monotonically decreasing in the intervals −π to −π/2 and π/2 to π. We add one of the constraints below if it is in conflict with the current abstract model for some pair of uf_sin terms.
For each uf_sin(x), we add the generic lower and upper bounds: −1 ≤ uf_sin(x) ≤ 1. For each uf_sin(y_x) over a base variable, we add the following axioms.
We use infinite-precision arithmetic to represent rational numbers. In our (model-driven) approach, we may have to deal with numbers with very large numerators and/or denominators. It may happen that we get such rational numbers from the spurious model for the variables appearing as arguments of transcendental functions. As a result of the piecewise-linear refinement, we would then feed to the SMT(UFLRA) solver numbers that have even (exponentially) larger numerators and/or denominators (due to the fact that poly-approx uses power series). This might significantly slow down the performance of the solver. We address this issue by approximating "bad" values having too large numerators and/or denominators using continued fractions. The precision of the rational approximation is increased periodically over the number of iterations. Thus we delay the use of numbers with larger numerators and/or denominators, and eventually find those numbers if they are really needed.
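In Python, this kind of continued-fraction rounding is exactly what Fraction.limit_denominator provides; a sketch of the value-simplification step (function name and threshold are ours):

```python
from fractions import Fraction

def simplify_value(v, max_den):
    """Round v to the closest rational with denominator <= max_den.

    limit_denominator walks the continued-fraction convergents of v, which is
    the classic best-rational-approximation construction."""
    if v.denominator <= max_den:
        return v
    return v.limit_denominator(max_den)

# A "bad" model value with a huge denominator (close to e):
v = Fraction(2718281828459045, 1000000000000000)
w = simplify_value(v, 1000)
```

The simplified value stays within 1/max_den of the original, so the lemmas built from it remain useful while keeping the numbers small.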
5 Related work
The approach proposed in this paper is an extension of the approach we previously adopted for checking the invariants of transition systems over the theory of polynomial Nonlinear Real Arithmetic. In this paper we extend the approach to transcendental functions, with the critical issue of irrational valuations. Furthermore, we propose a way to prove SAT without being forced to construct the model.
In the following, we compare with related approaches found in the literature.
5.0.1 Interval propagation and DeltaSat.
The first approach to SMT(NTA) was pioneered by iSAT3, which carries out interval propagation for nonlinear and transcendental functions. iSAT3 is both an SMT solver and a bounded model checker for transition systems. A subsequent but very closely related approach is the dReal solver. dReal relies on the notion of delta-satisfiability, which basically guarantees that there exists a variant (within a user-specified "radius") of the original problem such that it is satisfiable. The approach cannot guarantee that the original problem is satisfiable, since it relies on numerical approximation techniques that only compute safe overapproximations of the solution space.
There are a few key insights that differentiate our approach. First, it is based on linearization: it relies on solvers for SMT(UFLRA), and it proceeds by incrementally axiomatizing transcendental functions. Compared to interval propagation, we avoid numerical approximation (even within the bounds allowed by DeltaSat). In a sense, the precision of the approximation is selectively adjusted at run time, while in iSAT3 and dReal it is a user-defined threshold that is uniformly adopted in the computations. Second, our method relies on piecewise-linear approximations, which can provide substantial advantages when approximating a slope; intuitively, interval propagation ends up computing a piecewise-constant approximation. Third, a distinguishing feature of our approach is the ability to (sometimes) prove the existence of a solution even if the actual values are irrational, by reduction to an SMT-based validity check.
5.0.2 Deductive Methods.
The MetiTarski theorem prover relies on resolution and on a decision procedure for NRA to prove quantified inequalities involving transcendental functions. It works by replacing transcendental functions with upper- or lower-bound functions specified by means of axioms (corresponding to either truncated Taylor series or rational functions derived from continued-fraction approximations), and then using an external decision procedure for NRA to solve the resulting formulas. Differently from our approach, MetiTarski can neither prove the existence of, nor compute, a satisfying assignment, while we are able to (sometimes) prove the existence of a solution even if the actual values are irrational. Finally, we note that MetiTarski may require the user to manually write axioms if the ones automatically selected from a predefined library are not enough. Our approach is much simpler, and it is completely automatic.
A related approach, in which the NTA theory is referred to as NLA, is similar in spirit to MetiTarski in that it combines the SPASS theorem prover with the iSAT3 SMT solver. The approach relies on the SUP(NLA) calculus, which combines superposition-based first-order logic reasoning with SMT(NTA). Similarly to our work, the authors also use a UFLRA approximation of the original problem. This is however done only as a first check before calling iSAT3. In contrast, we rely on solvers for SMT(UFLRA), and we proceed by incrementally axiomatizing transcendental functions instead of directly calling an NTA solver. Another similarity with our work is the possibility of finding solutions in some cases. This is done by post-processing an inconclusive iSAT3 answer, trying to compute a certificate for a (point) solution within the narrow intervals returned by the solver, using an iterative analysis of the formula and of the computed intervals. Although similar in spirit, our technique for detecting satisfiable instances is completely different, being based on a logical encoding of the existence of a solution as an SMT(UFLRA) problem.
5.0.3 Combination of interval propagation and theorem proving.
Gappa [9, 18] is a standalone tool and a tactic for the Coq proof assistant, which can be used to prove properties about numeric programs (C-like) dealing with floating-point or fixed-point arithmetic. Another related Coq tactic is Coq.Interval. Both Gappa and Coq.Interval combine interval propagation and Taylor approximations for handling transcendental functions. A similar approach is followed by a tool written in HOL Light for handling conjunctions of non-linear equalities with transcendental functions, which uses Taylor polynomials up to degree two. NLCertify is another related tool which uses interval propagation for handling transcendental functions. It approximates polynomials with sums of squares, and transcendental functions with lower and upper bounds given by quadratic polynomials. Internally, all these tools/tactics rely on multi-precision floating-point libraries for computing the interval bounds.
A similarity between these approaches and our approach is the use of the Taylor polynomials. However, one distinguishing feature is that we use them to find lower and upper linear constraints by computing tangent and secant lines. Moreover, we do not rely on any floating point arithmetic library, and unlike the mentioned approaches, we can also prove the existence of a solution. On the other hand, some of the above tools employ more sophisticated/specialised approximations for transcendental functions, which might allow them to succeed in proving unsatisfiability of formulas for which our technique is not sufficiently precise.
Finally, since we are in the context of SMT, our approach also has the benefit of being: (i) fully automatic, unlike some of the above tools, which are meant to be used within interactive theorem provers; (ii) able to deal with formulas with an arbitrary Boolean structure, and not just conjunctions of inequalities; and (iii) capable of handling combinations of theories (including uninterpreted functions, bit-vectors, arrays), which are beyond what the above, more specialised tools can handle.
6 Experimental Analysis
The approach has been implemented on top of the MathSAT SMT solver, using the PySMT library. We use the GMP infinite-precision arithmetic library to deal with rational numbers. Our implementation and benchmarks are available at https://es.fbk.eu/people/irfan/papers/cade17-smt-nta.tar.gz.
We have run our experiments on a cluster equipped with 2.6GHz Intel Xeon X5650 machines, using a time limit of 1000 seconds and a memory limit of 6 GB.
We have run MathSAT in two configurations: with and without the universal check for proving SAT (called MathSAT and MathSAT-noUniSAT, respectively).
The other systems used in the experimental evaluation are dReal, iSAT3, and MetiTarski, in their default configurations (unless otherwise specified). Both iSAT3 and dReal were also run with higher precision than the default one. The difference between the two configurations is rather modest: when run with higher precision, they decrease the number of MaybeSat answers. MetiTarski can prove the validity of quantified formulae, answering either valid or unknown. As such, it is unfair to run it on satisfiable benchmarks. In general, we interpret the results of the comparison taking into account the features of the tools.
We consider three classes of benchmarks. First, the bounded model checking (BMC) benchmarks are the results of unrolling transition systems with nonlinear and transcendental transition relations, obtained from the discretization of hybrid automata. We took benchmarks from the distributions of iSAT3, from the discretization (by way of HyComp and nuXmv) of benchmarks from the literature, and from the hybrid model checkers HyST and Hare. Second, the Mathematical benchmarks are taken from the MetiTarski distribution. These are benchmarks containing quantified formulae over transcendental functions, and are all valid, most of them corresponding to known mathematical theorems. We selected the MetiTarski benchmarks without quantifier alternation and we translated them into quantifier-free SMT(NTA) problems. The third class of benchmarks consists of 944 instances from the dReal distribution that contain transcendental functions.
Both the mathematical and the dReal benchmarks contain several transcendental functions (such as log and cos) that are not supported natively by our prototype. We have therefore applied a preprocessing step that rewrites those functions in terms of exp and sin. (Sometimes we used a relational encoding: e.g. if a formula contains log(t), we rewrite it using exp(y) = t, where y is a fresh variable.) iSAT3 requires bounds on the variables and it is unable to deal with the benchmarks above (which either do not specify any bound, or specify too-wide bounds for the variables used). Thus, we scaled down the benchmarks so that the variables are constrained to a small interval, since for larger bounds iSAT3 raises an exception due to reaching the machine precision limit. Finally, for the BMC benchmarks, we ran iSAT3 in BMC mode, in order to ensure that its optimized unrolling is activated.
6.0.4 BMC and Mathematical Results.
In Table 1, we present the results. The benchmarks are classified as either Sat or Unsat when at least one of the solvers has been able to return a definite answer. If only MaybeSat answers are returned, then the benchmark is classified as Unknown. For each tool, we report the number of answers produced within the used resource limits. For the MaybeSat benchmarks, the numbers in parentheses indicate the instances which have been classified as Sat/Unsat by at least one other tool. For example, an entry "87 (32/7)" means that the tool returned MaybeSat for 87 instances, of which 32 were classified as Sat and 7 Unsat by some other tool. (There was no case in which two tools reported Sat and Unsat for the same benchmark.)
[Table 1, fragment recovered from extraction; columns are Sat | Unsat | MaybeSat:

Benchmarks: Bounded Model Checking (887) ; Mathematical (681)
dReal: n.a. | 392 | 281 (67/23) ; n.a. | 285 | 316 (0/253)

Benchmarks: Scaled Bounded Model Checking (887) ; Scaled Mathematical (681)
iSAT3: 35 | 470 | 87 (32/7) ; 0 | 212 | 137 (0/115)
dReal: n.a. | 403 | 251 (77/23) ; n.a. | 302 | 245 (0/195)]
First, we notice that the universal SAT technique directly results in 72 benchmarks proved to be satisfiable by MathSAT, without substantial degradation on the Unsat benchmarks. Second, we notice that MetiTarski is very strong on its own mathematical benchmarks, but it is unable to deal with the BMC ones, which contain features that are beyond what it can handle (Boolean variables and tens of real variables). (According to the documentation of MetiTarski, the tool is ineffective for problems with more than 10 real variables; our experiments on a subset of the instances confirmed this.)
In the lower part of Table 1, we present the results on the scaled-down benchmarks, on which iSAT3 can be run. The results for dReal and MathSAT are consistent with those obtained on the original benchmarks — the benchmarks are slightly simplified for MathSAT, which solves 12 more Sat instances and 2 more Unsat ones, and for dReal, which solves 11 more Unsat instances. The performance of iSAT3 is quite good, halfway between dReal and MathSAT on the bounded model checking benchmarks, and slightly lower than MathSAT on the mathematical ones. On the BMC benchmarks, iSAT3 is able to solve 35 Sat and 470 Unsat instances, 102 more than dReal and 135 fewer than MathSAT.
The MaybeSat results need further analysis. We notice that both iSAT3 and dReal often return MaybeSat on unsatisfiable benchmarks (e.g. all the mathematical ones are Unsat). There are many cases where dReal returns a DeltaSat result but at the same time prints an error message stating that the numerical precision limit has been reached. In these cases it is unlikely that the result is actually DeltaSat; it should rather be interpreted as MaybeSat. (We contacted the authors of dReal, and they reported that this issue is currently under investigation.)
Table 2: dReal benchmarks — all (944) / exp and sin only (96).

| Tool          | Sat  | Unsat | MaybeSat |
|---------------|------|-------|----------|
| dReal (orig.) | n.a. | 17    | 37 (3/3) |
6.0.5 dReal Benchmarks Results.
The dReal benchmarks turn out to be very hard. The results are reported in Table 2, where we show the performance of dReal both on the original benchmarks and on the ones resulting from the removal, via preprocessing, of the transcendental functions not directly supported by MathSAT. The results show that in the original format dReal solves many more instances, which suggests that handling other transcendental functions natively may lead to substantial improvements in MathSAT too. Interestingly, if we focus on the subset of 96 benchmarks that only contain exp and sin (and are handled by MathSAT without any need for preprocessing), we see that MathSAT is significantly more effective than dReal in proving unsatisfiability, solving more than twice the number of instances (right part of Table 2).
We conclude by noticing that overall MathSAT solves 906 benchmarks out of 2512, 127 more than dReal, the best among the other systems. A deeper analysis of the results (not reported here for lack of space) shows that the performance of the solvers is complementary: the “virtual-best system” solves 1353 benchmarks. This suggests that the integration of interval propagation may yield further improvements.
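As a sketch, the "virtual-best system" count mentioned above can be computed from per-solver outcomes as follows. The data below is illustrative only, not the paper's raw results.

```python
def virtual_best(results):
    """Count benchmarks solved by the virtual-best system.

    results: {solver_name: {benchmark_name: answer}}.
    A benchmark counts as solved if any solver returned a definite
    Sat or Unsat answer for it.
    """
    benchmarks = set().union(*(r.keys() for r in results.values()))
    return sum(
        any(results[s].get(b) in ("Sat", "Unsat") for s in results)
        for b in benchmarks
    )

# Illustrative toy data: each solver covers a different benchmark.
demo = {
    "MathSAT": {"b1": "Sat", "b2": "MaybeSat", "b3": "Unsat"},
    "dReal":   {"b1": "MaybeSat", "b2": "Unsat", "b3": "Timeout"},
}
assert virtual_best(demo) == 3  # complementary solvers cover all three
```

A virtual-best count well above each individual solver's count, as reported above (1353 vs. 906), is exactly the signature of such complementarity.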
We present a novel approach to Satisfiability Modulo the theory of transcendental functions. The approach is based on an abstraction-refinement loop, where transcendental functions are represented as uninterpreted ones in the abstract space SMT(UFLRA), and are incrementally axiomatized by means of piecewise-linear functions. We experimentally evaluated the approach on a large and heterogeneous benchmark set: the results demonstrate the potential of our approach, showing that it compares favorably both with delta-satisfiability and interval propagation and with methods based on theorem proving.
In the future, we plan to exploit the solver for the verification of infinite-state transition systems and hybrid automata with nonlinear dynamics, and for the analysis of resource consumption in temporal planning. Finally, we would like to define a unifying framework to compare linearization and interval propagation, and to exploit their potential synergies.
-  Akbarpour, B., Paulson, L.C.: MetiTarski: An automatic theorem prover for real-valued special functions. JAR 44(3), 175–205 (2010)
-  Allamigeon, X., Gaubert, S., Magron, V., Werner, B.: Certification of inequalities involving transcendental functions: combining SDP and max-plus approximation. In: 2013 European Control Conference (ECC). pp. 2244–2250. IEEE (2013)
-  Bak, S., Bogomolov, S., Johnson, T.T.: HYST: a source transformation and translation tool for hybrid automaton models. In: Proceedings of the 18th International Conference on Hybrid Systems: Computation and Control. pp. 128–133. ACM (2015)
-  Cavada, R., Cimatti, A., Dorigatti, M., Griggio, A., Mariotti, A., Micheli, A., Mover, S., Roveri, M., Tonetta, S.: The nuXmv symbolic model checker. In: CAV. Springer (2014)
-  Cimatti, A., Griggio, A., Irfan, A., Roveri, M., Sebastiani, R.: Invariant checking of NRA transition systems via incremental reduction to LRA with EUF. In: Legay and Margaria, pp. 58–75; also available at https://es-static.fbk.eu/people/griggio/papers/tacas17.pdf
-  Cimatti, A., Griggio, A., Mover, S., Tonetta, S.: HyComp: An SMT-based model checker for hybrid systems. In: TACAS. pp. 52–67. Springer (2015)
-  Cimatti, A., Griggio, A., Schaafsma, B.J., Sebastiani, R.: The MathSAT5 SMT solver. In: TACAS. LNCS, vol. 7795, pp. 93–107. Springer (2013)
-  Cimatti, A., Mover, S., Sessa, M.: From electrical switched networks to hybrid automata. In: FM. pp. 164–181. Springer (2016)
-  de Dinechin, F., Lauter, C., Melquiond, G.: Certifying the floating-point implementation of an elementary function using gappa. IEEE Trans. Comput. 60(2), 242–253 (Feb 2011)
-  Eggers, A., Kruglov, E., Kupferschmid, S., Scheibler, K., Teige, T., Weidenbach, C.: Superposition modulo non-linear arithmetic. In: FroCoS. LNCS, vol. 6989, pp. 119–134. Springer (2011)
-  Fränzle, M., Herde, C., Teige, T., Ratschan, S., Schubert, T.: Efficient solving of large non-linear arithmetic constraint systems with complex boolean structure. JSAT 1(3-4), 209–236 (2007)
-  Gao, S., Avigad, J., Clarke, E.M.: δ-complete decision procedures for satisfiability over the reals. In: IJCAR. pp. 286–300. Springer (2012)
-  Gao, S., Kong, S., Clarke, E.M.: dReal: An SMT solver for nonlinear theories over the reals. In: CADE. pp. 208–214. Springer (2013)
-  Gario, M., Micheli, A.: PySMT: a solver-agnostic library for fast prototyping of SMT-based algorithms. In: SMT. pp. 373–384 (2015)
-  Hazewinkel, M.: Encyclopaedia of Mathematics: Stochastic Approximation — Zygmund Class of Functions. Encyclopaedia of Mathematics, Springer Netherlands (1993), https://books.google.it/books?id=1ttmCRCerVUC
-  Legay, A., Margaria, T. (eds.): Tools and Algorithms for the Construction and Analysis of Systems - 23rd International Conference, TACAS 2017, Held as Part of ETAPS 2017, Uppsala, Sweden, April 22-29, 2017, Proceedings, Part I, Lecture Notes in Computer Science, vol. 10205 (2017)
-  Magron, V.: Nlcertify: A tool for formal nonlinear optimization. In: International Congress on Mathematical Software. pp. 315–320. Springer (2014)
-  Martin-Dorel, É., Melquiond, G.: Proving tight bounds on univariate expressions with elementary functions in Coq. Journal of Automated Reasoning 57(3), 187–217 (2016)
-  Melquiond, G.: Coq-interval (2011)
-  Nemhauser, G.L., Wolsey, L.A.: Integer and Combinatorial Optimization. Wiley-Interscience, New York, NY, USA (1988)
-  Niven, I.: Numbers: Rational and Irrational. Mathematical Association of America (1961)
-  Ratschan, S.: Efficient solving of quantified inequality constraints over the real numbers. TOCL 7(4), 723–748 (2006)
-  Roohi, N., Prabhakar, P., Viswanathan, M.: HARE: A hybrid abstraction refinement engine for verifying non-linear hybrid automata. In: Legay and Margaria, pp. 573–588
-  Scheibler, K., Kupferschmid, S., Becker, B.: Recent Improvements in the SMT Solver iSAT. MBMV 13, 231–241 (2013)
-  Solovyev, A., Hales, T.C.: Formal verification of nonlinear inequalities with Taylor interval approximations. In: NASA Formal Methods Symposium. pp. 383–397. Springer (2013)
-  Townsend, E.: Functions of a Complex Variable. Read Books (2007)
-  Weidenbach, C., Dimova, D., Fietzke, A., Kumar, R., Suda, M., Wischnewski, P.: SPASS version 3.5. In: CADE. LNCS, vol. 5663, pp. 140–145. Springer (2009)