1 Introduction
Software verification tools operate by converting their input (e.g., a program's source code) into a format understandable by an automated theorem prover, encoding high-level program properties (e.g., arithmetic overflow) and algorithms (e.g., bounded model checking) into low-level equations (e.g., SMT). The encoding process usually involves several intermediate steps, designed to generate a formula that can be efficiently solved by the theorem provers. In this domain, the analysis of programs with floating-point arithmetic has received much attention, primarily when safety depends on the correctness of these programs. Notoriously, the Ariane 5 rocket exploded mid-air in 1996 due to an exception thrown by an invalid floating-point conversion [41]. It is a complex problem because the semantics may change beyond the code level, e.g., due to optimizations performed by compilers [44].
There exist various static analysis tools that are able to check floating-point computations [9, 10, 21, 22, 30, 50, 54]. For example, Astrée is a static analysis tool that considers all possible rounding errors when verifying C programs with floating-point numbers [9]. It has been applied to verify embedded software in the flight control software of the Airbus. CBMC [21] is another notable software model checking tool, which implements a bit-precise decision procedure for the theory of floating-point arithmetic [11]. In particular, CBMC is the main verification engine employed by other software verifiers that efficiently verify C programs with floating-point numbers, such as PeSCo [51] and VeriAbs [19]. Proving the correctness of C programs with floating-point numbers is a challenging verification task, mainly because of the bit-level complexity of floating-point computations. To the best of our knowledge, there exists no other study that presents a thorough comparative evaluation of software verifiers and SMT solvers concerning the verification of C programs that contain floating-point arithmetic.
Here we present the new floating-point technologies developed in one bounded model checker, ESBMC [31], and evaluate them using a large set of floating-point benchmarks [7]. In particular, we evaluate a new floating-point API on top of our SMT backend that extends floating-point support to all solvers supported by ESBMC (including Boolector [14] and Yices [26], which currently do not support the SMT FP logic [13]). For the evaluation, we used the benchmarks of the 2020 International Competition on Software Verification (SV-COMP) [7], from the floating-point subcategory. The five different solvers supported by ESBMC were evaluated (Z3 [25], Yices [26], Boolector [14], MathSAT [20], and CVC4 [4]), and ESBMC is able to solve the most benchmarks within the usual time and memory limits (15 minutes and 15 GB, respectively) when using Boolector. In particular, results show that Boolector can solve more floating-point problems using the new floating-point API than MathSAT or Z3, which have native floating-point APIs. Our experimental results also show that our floating-point API implemented in ESBMC is competitive with other state-of-the-art software verifiers, including CBMC [21], PeSCo [51], and VeriAbs [19].
2 Floating-Point Arithmetic
The manipulation of real values in programs is a necessity in many fields, e.g., scientific programming [44]. The set of real numbers, however, is infinite, and some numbers cannot be represented with finite precision, e.g., irrational numbers. Over the years, computer manufacturers have experimented with different machine representations for real numbers [34]. The two fundamental ways to encode a real number are the fixed-point representation, usually found in embedded microprocessors and microcontrollers [28], and the floating-point representation, in particular the IEEE floating-point standard (IEEE 754-2008 [36]), which has been adopted by many processors [35].
Each encoding can represent a range of real numbers depending on the word length and how the bits are distributed. A fixed-point representation of a number consists of an integer component, a fractional component, and a bit for the sign. In contrast, the floating-point representation consists of an exponent component, a significand component, and a bit for the sign. Floating-point has a higher dynamic range than fixed-point (e.g., a float in C has 24 bits of significand precision but can represent magnitudes up to approximately 3.4 × 10^38), while fixed-point can have higher precision than floating-point [47]. Furthermore, the IEEE floating-point standard defines values that have no equivalent in a fixed-point or real encoding, e.g., positive and negative infinities. In general, IEEE floating-point values are of the following kinds: zeroes, NaNs, infinities, normal, and denormal (or subnormal) [36].
Definition 1
(Infinities) Both +inf and −inf are defined in the standard. These floating-point values represent overflows or the result of non-zero floating-point divisions by zero (Annex F of the C language specification [38]).
Definition 2
(Zeroes) Both +0 and −0 are defined in the standard. Most operations behave identically when presented with +0 or −0, except when extracting the sign bit or dividing by zero (the usual rules about signedness apply and will result in either +inf or −inf). The equality comparison even evaluates to true when comparing positive against negative zeroes.
Definition 3
(NaNs) The Not a Number special values represent undefined or unrepresentable values, e.g., √−1 or 0.f/0.f. As a safety measure, most operations will return NaN if at least one operand is NaN, as a way to indicate that the computation is invalid. NaNs are not comparable: except for the not-equal operator (!=), all other comparisons evaluate to false (even comparing a NaN against itself). Furthermore, casting NaNs to integers is undefined behavior.
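These comparison rules can be observed directly in C; the sketch below (the helper name is hypothetical) uses the standard math.h facilities:

```c
#include <assert.h>
#include <math.h>

/* Checks that the NaN comparison rules of Definition 3 hold:
 * every comparison involving NaN is false, except for != . */
int nan_comparison_rules_hold(void) {
    double v = NAN;               /* e.g., the result of 0.0 / 0.0 */
    return isnan(v)
        && !(v == v)              /* NaN is not equal even to itself */
        && !(v < 1.0) && !(v > 1.0)
        && (v != v);              /* the only comparison that holds  */
}
```

Note that the last case (casting NaN to an integer) is deliberately absent: it is undefined behavior and cannot be demonstrated portably.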
Definition 4
(Normal) A non-zero floating-point value that can be represented within the range supported by the encoding.
Definition 5
(Denormal (or subnormal)) A non-zero floating-point value representing numbers very close to zero, filling the gap between zero and what can usually be represented by the encoding.
The IEEE standard also defines five kinds of exceptions, to be raised under specific conditions, which are: invalid operation, overflow, division by zero, underflow, and inexact [36].
Exception 1
(Invalid Operation) This exception is raised when the operation produces a NaN as a result.
Exception 2
(Overflow) This exception is raised when the result of an operation is too large to be represented by the encoding. By default, these operations return ±inf.
Exception 3
(Division By Zero) It is raised by x/0, for x ≠ 0. By default, these operations return ±inf.
Exception 4
(Underflow) Raised when the result is too small to be represented by the encoding. By default, the result is a denormal floating-point value.
Exception 5
(Inexact) This exception is raised when the encoding cannot represent the result of an operation unless it is rounded. By default, these operations will round the result.
The standard defines five rounding modes. Given a real number r, it is rounded to a floating-point value using one of: Round Toward Positive (RTP), Round Toward Negative (RTN), Round Toward Zero (RTZ), Round to Nearest ties to Even (RNE), and Round to Nearest ties Away from zero (RNA):
Mode 1
(RTP) The result is the least floating-point value greater than or equal to r.
Mode 2
(RTN) The result is the greatest floating-point value less than or equal to r.
Mode 3
(RTZ) The result is the floating-point value with the same sign as r whose magnitude is the greatest floating-point value less than or equal to |r|.
Mode 4
(RNE) The result is the floating-point value closest to r; if two floating-point values are equidistant from r, the result is the one whose least significant bit is zero.
Mode 5
(RNA) The result is the floating-point value closest to r; if two floating-point values are equidistant from r, the result is the one further away from zero.
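The two nearest-rounding modes differ only in their tie-breaking rule. The following sketch rounds a non-negative double to an integral value under each rule (the helper names are hypothetical and unrelated to the API, which operates on bit-vectors):

```c
#include <assert.h>

/* Round non-negative r to an integral value, ties to even (RNE). Sketch only. */
double round_rne(double r) {
    double lo = (double)(long)r;          /* floor, valid for non-negative r */
    double hi = lo + 1.0;
    if (r == lo) return lo;               /* already integral */
    if (r - lo < hi - r) return lo;
    if (hi - r < r - lo) return hi;
    return ((long)lo % 2 == 0) ? lo : hi; /* tie: the even result wins */
}

/* Round non-negative r to an integral value, ties away from zero (RNA). */
double round_rna(double r) {
    double lo = (double)(long)r;
    double hi = lo + 1.0;
    if (r == lo) return lo;
    if (r - lo < hi - r) return lo;
    return hi;   /* tie or more than halfway: round away from zero */
}
```

For example, 2.5 rounds to 2 under RNE (2 is even) but to 3 under RNA.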
The standard also defines some arithmetic operations (add, subtract, multiply, divide, square root, fused multiply-add, remainder), conversions (between formats, to and from strings), and comparisons and total ordering. In particular, the standard defines how floating-point operations are to be encoded using bit-vectors. Table 1 shows four primitive types usually available in the x86 family of processors that follow the standard; each type is divided into three parts: one bit for the sign, an exponent, and a significand part, whose sizes depend on the bit length of the type. The significands also include a hidden bit: a 1 bit that is assumed to be the leading part of the significand, unless the floating-point value is denormal.

Name   Common Name          Size (exponent + significand)
fp16   Half precision       16 (5 + 11)
fp32   Single precision     32 (8 + 24)
fp64   Double precision     64 (11 + 53)
fp128  Quadruple precision  128 (15 + 113)

Table 1: Primitive floating-point types (significand sizes include the hidden bit).
In Annex F of the C language specification [38], fp32 and fp64 are defined as float and double. The standard does not define any type for fp16, and compilers usually implement two formats: __fp16, as defined in the ARM C Language Extension (ACLE) [2], and _Float16, as defined by the ISO/IEC 18661-3:2015 standard [39]. While __fp16 is only a storage and interchange format (meaning that it is promoted when used in arithmetic operations), _Float16 is an actual type, and arithmetic operations are performed using half precision. The standard only weakly specifies how an fp128 (long double in C) should be implemented, and compilers usually implement it using an 80-bit extended precision format [35].
Floating-point numbers are represented as (−1)^s × m × 2^e. Here, m is the significand and 2^e is the scaling factor [49]. Regular floating-point numbers are encoded assuming that the leading hidden bit of the significand is 1 and the exponent lies within the normal range of the format, e.g., the number 0.125 is represented as 1.0 × 2^−3 in the floating-point format. Denormals are encoded assuming that the leading hidden bit is zero and the stored exponent is all zeros. Zeros are represented as an all-zero bit-vector (except for the sign bit if the zero is negative). Finally, a bit-vector with the exponent all ones and the significand all zeros is an infinity, while a bit-vector with the exponent all ones and the significand not zero is a NaN.
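This layout can be inspected in C by reinterpreting a float's bits; the helper below (a hypothetical name, not part of ESBMC) extracts the three fp32 fields:

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>
#include <string.h>

/* Raw fields of an fp32 bit-vector: 1 sign bit, 8 exponent bits,
 * 23 stored significand bits (the hidden bit is not stored). */
typedef struct { uint32_t sign, exp, sig; } fp32_fields;

fp32_fields fp32_unpack(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* reinterpret, not convert */
    fp32_fields r;
    r.sign = bits >> 31;
    r.exp  = (bits >> 23) & 0xFF;
    r.sig  = bits & 0x7FFFFF;
    return r;
}
```

For 0.125 = 1.0 × 2^−3, the biased exponent is −3 + 127 = 124 and the stored significand is zero; an exponent of all ones (255) with a zero significand marks infinity, and with a non-zero significand marks NaN.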
3 Verifying C Programs with Floating-Point Arithmetic
When ESBMC was created, all floating-point types and operations were encoded using fixed-points [1, 5, 6, 18, 37]. A fixed-point number is represented in ESBMC as a pair ⟨W, F⟩, where W is the total number of bits and F is the number of fractional bits, e.g., the number 0.125 is represented as 0000.0010 (assuming it is 8 bits long with 4 fractional bits) in the fixed-point format. Fixed-point arithmetic is performed similarly to bit-vector arithmetic, except that the operations are applied separately to the integral and fractional parts of the fixed-points and concatenated at the end (overflows in the fractional parts are treated accordingly). Differently from floating-point numbers, every bit-vector represents exactly one number in the real domain.
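As an illustrative sketch (the helper names are assumptions, not ESBMC's code), a toy ⟨8, 4⟩ fixed-point type can be modeled on top of an 8-bit integer, with addition performed as plain integer addition on the underlying bit-vector:

```c
#include <assert.h>
#include <stdint.h>

/* A toy <8, 4> fixed-point type: 8 bits total, 4 fractional bits. */
#define FRAC_BITS 4

int8_t fx_from_double(double d) { return (int8_t)(d * (1 << FRAC_BITS)); }
double fx_to_double(int8_t v)   { return (double)v / (1 << FRAC_BITS); }

/* Addition is ordinary integer addition on the underlying bits. */
int8_t fx_add(int8_t a, int8_t b) { return (int8_t)(a + b); }
```

Here 0.125 is stored as the integer 2, i.e., the bit pattern 0000.0010.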
The lack of proper floating-point encoding, however, meant that ESBMC was unable to accurately verify an entire class of programs, such as the famous floating-point “issue” [55] illustrated in Figure 1.
The assertion in line 1 holds if the program is encoded using fixed-point arithmetic, but fails if floating-point arithmetic is used. The assertion violation arises from the fact that floating-point numbers in the IEEE standard are represented as a whole number times a power of two, so the only fractions that can be expressed exactly are those whose denominator is a power of two. Since in binary (base 2) the only prime factor is 2, only fractions such as 1/2, 1/4, and 1/8 can be expressed exactly; the constants 0.1, 0.2, and 0.3 (i.e., 1/10, 2/10, and 3/10) in the program are thus only approximations. In the program in Figure 1, the constants are:

- x is 0.1000000000000000055511151231257827021181583404541015625
- y is 0.200000000000000011102230246251565404236316680908203125
- w is 0.3000000000000000444089209850062616169452667236328125
- z is 0.299999999999999988897769753748434595763683319091796875
The discrepancy happens in the C program because the closest floating-point number to 0.3 is smaller than the real 0.3, while the closest floating-point number to 0.2 is greater than the real 0.2; adding the floating-point approximations of 0.1 and 0.2 thus results in a floating-point number slightly greater than the floating-point constant 0.3.
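Figure 1 itself is not reproduced in this excerpt, but a program of the following shape (a hypothetical sketch, not the exact benchmark) exhibits the issue under IEEE double-precision arithmetic:

```c
#include <assert.h>

/* Sketch of the kind of program in Figure 1: returns whether 0.1 + 0.2
 * compares equal to 0.3 in double precision. */
int sum_equals_point_three(void) {
    double x = 0.1, y = 0.2;   /* both constants round *up* to the nearest double  */
    double w = x + y;
    double z = 0.3;            /* this constant rounds *down* to the nearest double */
    return w == z;             /* 0 under IEEE arithmetic: w is slightly above z    */
}
```

Under a fixed-point encoding, where 0.1, 0.2, and 0.3 would all be represented by the same approximation error pattern or rejected outright, the comparison behaves differently, which is exactly the discrepancy described above.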
To address this limitation, ESBMC was extended to support floating-point arithmetic [32], but it was only able to encode it using SMT solvers that offered native support for the floating-point theory, i.e., Z3 and MathSAT. A floating-point number is represented in ESBMC following the IEEE 754 standard for the size of the exponent and the significand precision. For example, a half-precision floating-point number (16 bits) has 1 bit for the sign, 5 bits for the exponent, and 11 bits for the significand (1 hidden bit) [36].
The work described in this paper, namely a new floating-point API in our SMT backend, is the natural evolution of our research: the support of floating-point arithmetic for the remaining SMT solvers in ESBMC (Boolector [46], Yices [26], and CVC4 [4]). The new floating-point API works by converting all floating-point types and operations to bit-vectors (a process called bit-blasting), thus extending the support for floating-point arithmetic to any solver that supports bit-vector arithmetic [33].
3.1 Bit-blasting Floating-Point Arithmetic
The SMT FP logic is an addition to the SMT standard, first proposed in 2010 by Rümmer and Wahl [52]. The current version of the theory largely follows the IEEE standard 754-2008 [36]. It formalizes floating-point arithmetic, positive and negative infinities and zeroes, NaNs, relational and arithmetic operators, and five rounding modes: round nearest with ties choosing the even value, round nearest with ties choosing away from zero, round towards positive infinity, round towards negative infinity, and round towards zero.
Some functionalities from the IEEE standard are not yet supported by the FP logic, as described by Brain et al. [13]; however, when encoding C programs using the FP logic, most of the process is a one-to-one conversion, as we described in our previous work [32].
Encoding programs using the SMT floating-point theory has several advantages over a fixed-point encoding; the main one is the correct modeling of ANSI-C/C++ programs that use IEEE floating-point arithmetic. ESBMC ships with models for most of the current C11 standard functions [38]; floating-point exception handling, however, is not yet supported.
The encoding algorithms, however, can be very complex, and it is not uncommon to see SMT solvers struggling to support every corner case [27, 48]. Currently, three SMT solvers support the SMT floating-point theory: Z3 [25], MathSAT [20], and CVC4 [4]; ESBMC implements the floating-point encoding for all of them using their native APIs. Regarding the support from the solvers, Z3 implements all operators; MathSAT implements all but two: fp.rem (remainder) and fp.fma (fused multiply-add); and CVC4 implements all but the conversions to other sorts.
The three solvers offer two (non-standard) functions to reinterpret floating-point values to and from bit-vectors: fp_as_ieeebv and fp_from_ieeebv, respectively. These functions can be used to circumvent any lack of operators and only require the user to write the missing operators. Note that this is different from converting floating-point values to bit-vectors and vice-versa: converting to bit-vectors follows the rounding modes defined by the IEEE 754 standard, while reinterpreting a floating-point value as a bit-vector returns its raw bit-vector format. We use these functions in our backend to implement the fused multiply-add operator for MathSAT.
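The distinction between numeric conversion and bit reinterpretation can be mirrored in plain C (hypothetical helper names; memcpy plays the role of fp_as_ieeebv):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Numeric conversion: follows the C rules (truncation toward zero). */
int32_t fp_convert(float f) { return (int32_t)f; }

/* Reinterpretation: the raw IEEE bit pattern, analogous to fp_as_ieeebv. */
uint32_t fp_reinterpret(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return bits;
}
```

Converting 1.5f yields the integer 1, while reinterpreting it yields the bit pattern 0x3FC00000 (sign 0, biased exponent 127, significand 0.5): two entirely different results from the same input.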
The implementation of the floating-point API is based on the encoding of Muller et al. [36, 45]. Before we can discuss the algorithms in the floating-point API, however, we first need to describe the basic operations performed by most of them, the four-stage pipeline [12]: unpack, operate, round, and pack.

- Unpack stage: the floating-point value is split into three bit-vectors, one for the sign, one for the exponent, and one for the significand. In our floating-point API, the unpack operation also adds the hidden bit to the significand and unbiases the exponent. It offers an option to normalize subnormal exponents and significands if requested.
- Operate stage: in this stage, conversion and arithmetic operations are performed on the three bit-vectors. Depending on the operation, the bit-vectors need to be extended, e.g., during a fused multiply-add operation, the significand has length 2 * sb + 3, and the exponent has length eb + 2.
- Round stage: since the previous stage was performed using extended bit-vectors, this stage needs to round the bit-vectors back to the nearest representable floating-point value of the target format. Here, guard and sticky bits in the significand are used to determine how far the bit-vector is from the nearest representable value, and the rounding mode is used to determine in which direction the floating-point value will be rounded. The exponent bit-vector is also checked for under- or overflow when rounding, to create the correct floating-point value, e.g., an infinity might be created if the exponent is too large for the target format.
- Pack stage: in the final stage, the three bit-vectors are concatenated to form the final floating-point value.
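The round stage's use of the extra low bits can be sketched as follows (assuming three extra bits with the sticky bit folded into the lowest one; this is illustrative only, not ESBMC's exact code, and it omits the exponent adjustment needed when the significand overflows):

```c
#include <assert.h>
#include <stdint.h>

/* RNE rounding of an extended significand carrying 3 extra low bits
 * (guard, round, sticky), as produced by the operate stage. */
uint64_t round_rne_extended(uint64_t sig_ext) {
    uint64_t base  = sig_ext >> 3;   /* drop the extra bits          */
    uint64_t extra = sig_ext & 0x7;  /* guard/round/sticky pattern   */
    if (extra > 4)  return base + 1;           /* more than halfway  */
    if (extra == 4) return base + (base & 1);  /* tie: round to even */
    return base;                               /* less than halfway  */
}
```

For example, a significand of 21 with extra bits 100 (an exact tie) rounds up to the even value 22, while 20 with the same extra bits stays at 20.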
The four-stage pipeline is used when performing operations on floating-point values. We grouped the operations into seven groups: sort constructors, rounding-mode constructors, value constructors, classification operators, comparison operators, conversion operators, and arithmetic operators.
In the three constructor groups (sorts, rounding modes, and values), floating-point values are encoded using bit-vectors:
Sort constructors. The sorts follow the definitions in Table 1 for the bit-vector sizes. We do not support the 80-bit extended precision format used in some processors [35]; instead, we use 128 bits for quadruple precision.
Rounding-mode constructors. The floating-point API supports all rounding modes described in Section 2, even though the C standard does not support RNA [38]. These are encoded as 3-bit-long bit-vectors.
Value constructors. Floating-point literals, plus and minus infinity, plus and minus zeroes, and NaNs can be created. For the latter, the same canonical NaN is always created (positive, with a fixed non-zero significand). All values are bit-vectors with total length 1 + eb + sb, where eb is the number of exponent bits and sb is the number of significand bits. All algorithms in the floating-point API assume one hidden bit in the significand.
The remaining four operator groups use at least one of the stages in the pipeline to reason about floating-point values:
Classification operators. Algorithms to classify normals, subnormals, zeros (regardless of sign), infinities (regardless of sign), NaNs, negatives, and positives. The operators work by unpacking the floating-point value and comparing the bit-vectors against the definitions.
Comparison operators. The operators “greater than or equal to”, “greater than”, “less than or equal to”, “less than”, and “equality” are supported. The first three are written in terms of the last two. All of them evaluate to false if one of their arguments is NaN; this check is done using the NaN classification operator.
Conversion operators. The floating-point API can convert:
- Floating-point values to signed or unsigned bit-vectors: converts the floating-point value to a bit-vector, always rounding towards zero. These operations generate a free variable if the floating-point value cannot be represented by the target bit-vector, i.e., if it is out of range, NaN, or infinity. Minus zero is converted to zero.
- Floating-point values to another floating-point format: converts the floating-point value to a different format using a rounding mode. NaN, infinity, and zeroes are always convertible between formats, but the conversion might create an infinity if the target format cannot represent the original floating-point value.
- Signed or unsigned bit-vectors to floating-point values: converts a bit-vector to the nearest representable floating-point value, using a rounding mode. It might create an infinity if the target format cannot represent the original bit-vector.
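These semantics mirror the native C conversions, as the sketch below shows (hypothetical helper names; note that the out-of-range float-to-int case is undefined behavior in C, which is precisely why the API yields a free variable for it instead):

```c
#include <assert.h>
#include <math.h>

/* Float-to-int conversion truncates toward zero (RTZ) in C,
 * matching the API's floating-point-to-bit-vector operator. */
int to_int_rtz(double d) { return (int)d; }

/* Narrowing to a smaller format rounds, and may overflow to infinity
 * when the value exceeds the target format's range. */
float narrow(double d)   { return (float)d; }
```

For example, 3.9 and −3.9 convert to 3 and −3, minus zero converts to zero, and narrowing 1e39 (beyond the fp32 range) yields infinity under the default rounding mode.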
Arithmetic operators. The floating-point API implements:
- Absolute value operator: sets the sign bit of the floating-point value to 0.
- Negation operator: flips the sign bit of the floating-point value.
- Addition operator: the significands are extended by 3 bits to perform the addition, and the exponents are extended by 2 bits to check for overflows. The algorithm first aligns the significands, then adds them.
- Subtraction operator: negates the right-hand side of the expression and uses the addition operator, i.e., a − b = a + (−b).
- Multiplication operator: the lengths of the significand bit-vectors are doubled before multiplying them, and the exponents are added together. The final sign bit is the result of xor'ing the signs of both operands of the multiplication.
- Division operator: the lengths of both the significand and the exponent are extended by 2 bits; then bit-vector subtractions are used to calculate the target significand and exponent.
- Fused multiply-add: the significand is extended to length 2 * sb + 3 to accommodate both the multiplication and the addition, and the exponent is extended by 2 bits. The first two operands are multiplied, and the result is aligned with the third operand before adding them.
- Square root operator: neither the significand nor the exponent is extended, since the result always fits the original format and can never underflow or overflow, as per the operator definition. Here, the final exponent is half the unbiased exponent (adjusted by the number of leading zeros of the significand), and the square root of the significand is calculated using a restoring algorithm [45, Chapter 10].
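The restoring technique cited above can be sketched on a plain integer, producing one result bit per iteration from the most significant downward (illustrative only; ESBMC's version operates on the significand bit-vector):

```c
#include <assert.h>
#include <stdint.h>

/* Restoring square root: computes floor(sqrt(n)) digit by digit.
 * Each step brings down two bits of n and tries to subtract the
 * candidate divisor; on failure the remainder is "restored" (kept). */
uint32_t restoring_sqrt(uint32_t n) {
    uint32_t root = 0, rem = 0;
    for (int i = 15; i >= 0; i--) {
        rem = (rem << 2) | ((n >> (2 * i)) & 0x3);  /* next two bits of n */
        uint32_t trial = (root << 2) | 1;           /* 4*root + 1         */
        root <<= 1;
        if (rem >= trial) {       /* subtraction succeeds: result bit is 1 */
            rem -= trial;
            root |= 1;
        }                         /* otherwise restore: rem is unchanged   */
    }
    return root;
}
```

At the end, rem holds the remainder n − root², which in the floating-point version feeds the inexact/sticky information for rounding.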
All operators but the absolute value and negation handle special values (NaN, infinities, and zeroes) before performing the operations, e.g., in the multiplication operator, if the left-hand side argument is positive infinity, the result is NaN if the right-hand side argument is 0; otherwise, the result is an infinity with the right-hand side argument's sign. Furthermore, all arithmetic operations in the floating-point API that take more than one floating-point value as arguments assume that the values have the same format. This assumption is not a problem when converting C programs, as type promotion rules already ensure this precondition [38].
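The special-value rules for multiplication can be observed with native IEEE doubles (illustrative helpers using the standard math.h classification macros):

```c
#include <assert.h>
#include <math.h>

/* inf * 0 is an invalid operation and yields NaN. */
int inf_times_zero_is_nan(void) {
    return isnan(INFINITY * 0.0);
}

/* inf * x (x nonzero) yields an infinity whose sign is the
 * xor of the operand signs, here: positive * negative = negative. */
int inf_times_neg_is_neg_inf(void) {
    double r = INFINITY * -2.0;
    return isinf(r) && signbit(r);
}
```

These are exactly the cases the API dispatches on before entering the four-stage pipeline.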
A detailed table with all the supported features of the floating-point API, and a comparison with the features of other solvers, can be found in Appendix 0.A.
4 Experimental Evaluation
Our experimental evaluation consists of three parts. In Section 4.1, we present the benchmarks used to evaluate the implementation of our floating-point API. In Section 4.2, we compare the verification results of the new floating-point API in ESBMC using several solvers. In Section 4.3, we compare the best solver found in Section 4.2 against all the tools that competed in the ReachSafety-Floats subcategory of SV-COMP 2020. Our experimental evaluation aims to answer two research questions:

RQ1 (Soundness and completeness) Is our floating-point API sound and complete?
RQ2 (Performance) How does the implementation of our floating-point API compare to other software verifiers?
4.1 Experimental Setup
We evaluate our approach using verification tasks from SV-COMP 2020 [7]. In particular, we considered the benchmarks of the subcategory ReachSafety-Floats, described as “containing tasks for checking programs with floating-point arithmetics”.
The ReachSafety-Floats subcategory is part of the ReachSafety category. In this category, a function call is checked for reachability; the property is formally defined in the competition as G ! call(__VERIFIER_error()), i.e., “The function __VERIFIER_error() is not called in any finite execution of the program”.
We have implemented our floating-point API in ESBMC. We run ESBMC on each benchmark in that subcategory once per solver, with the following set of options: --no-div-by-zero-check, which disables the division-by-zero check (an SV-COMP requirement); --incremental-bmc, which enables the incremental BMC; --unlimited-k-steps, which removes the upper limit of iteration steps in the incremental BMC algorithm; --floatbv, which enables the SMT floating-point encoding; --32, which assumes a 32-bit architecture; and --force-malloc-success, which forces all dynamic allocations to succeed (also an SV-COMP requirement). We also disable the pointer safety and array bounds checks (--no-pointer-check, --no-bounds-check) since, per the competition definition, these benchmarks only have reachability bugs. Finally, to select an SMT solver for verification, the options --boolector, --z3, --cvc, --mathsat, and --yices are used.
All experiments were conducted on our mini cluster at the University of Manchester, UK. The compute nodes used are equipped with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz, where nine instances of ESBMC were executed in parallel. For each benchmark, we set time and memory limits of 900 seconds and 15 GB, respectively, as per the competition definitions. We, however, do not present the results as scores (as is done in SV-COMP) but show the number of correct and incorrect results, and the verification time.
4.2 Floating-Point API Evaluation
Figure 2 shows the number of correctly verified programs from the ReachSafety-Floats subcategory, using several solvers, and how long it took to complete the verification. There exists no case where ESBMC reports an incorrect result.
Boolector (lingeling, fp2bv) reports the highest number of correct results, followed by MathSAT using its native floating-point API. This evaluation produced slightly better results for these solvers than our previous one, where MathSAT was able to solve floating-point problems quickly but suffered slowdowns in programs with arrays [32]. MathSAT (fp2bv) presented the fewest correct results.
The results show that Z3 with its native floating-point API and Z3 with our fp2bv API produce very similar results; this is expected since our fp2bv API is heavily based on the bit-blasting performed by Z3 when solving floating-point formulas. The number of variables and clauses generated in the CNF format when using Z3 with its native floating-point API is 1%–2% smaller than the number generated when using our fp2bv API. This smaller number explains the slightly better results: we assume it is the result of optimizations applied when Z3 performs the bit-blasting internally.
The MathSAT results show that its native API can solve more benchmarks than MathSAT (fp2bv) within the time and memory limits. These benchmarks contain chains of multiplications and thus require a high computational effort during the propositional satisfiability search. Given that we replace all higher-level operators by bit-level circuit equivalents (bit-blasting), we end up destroying structural word-level information in the problem formulation. These results lead us to believe that the MathSAT ACDCL algorithm is somehow optimized for FP operations; unfortunately, MathSAT is a free but closed-source tool, so we cannot confirm this.
The total verification time for each solver is also illustrated in Figure 2, and again Boolector (lingeling, fp2bv) was the fastest solver, followed by Boolector (CaDiCaL, fp2bv) and Yices (fp2bv). Overall, Boolector (lingeling, fp2bv) presented the best results: it correctly verified the most programs while also being the fastest solver, noticeably faster than the second-fastest configuration, which is also Boolector but with a different SAT solver (CaDiCaL).
ESBMC produced no incorrect results in this evaluation, which partially answers RQ1: although we cannot formally prove that our algorithm is sound and complete, strong empirical evidence suggests it.
4.3 Comparison with Other Software Verifiers
We compare the implementation of our floating-point API with other software verifiers: 2LS [42], CBMC [40], CPA-Seq [8], DIVINE [3], PeSCo [51], Pinaka [17], Symbiotic [16], and VeriAbs [19]. Figure 3 illustrates the ReachSafety-Floats results of our best approach against the tools that participated in SV-COMP 2020. In particular, we have used the binaries and scripts of these tools that are available at the SV-COMP 2020 website under “Participating Teams” (https://sv-comp.sosy-lab.org/2020/systems.php). Overall, VeriAbs achieved the highest number of correct results, followed by Pinaka, ESBMC, and CBMC.
VeriAbs can verify C programs with floating-point numbers via abstraction using SAT solvers. In particular, VeriAbs replaces loops in the original code by abstract loops of small known bounds; it performs value analysis to compute loop invariants and then applies an iterative refinement using k-induction. The VeriAbs tool uses CBMC as its backend to prove properties and find errors, which thus allows VeriAbs to verify C programs with floating-point numbers. By contrast, ESBMC uses an iterative technique and verifies the program for each unwind bound until it exhausts the time or memory limits. Intuitively, ESBMC can either find a counterexample within a given number of loop unwindings or fully unwind all loops using the same unwinding bound so that it can provide a correct result. ESBMC also relies on SMT solvers to check the satisfiability of the verification conditions that contain floating-point arithmetic.
Pinaka verifies C programs using CBMC, but it relies on incremental SAT solving coupled with eager state-infeasibility checks. Additionally, Pinaka extends CBMC to support both Breadth-First Search and Depth-First Search as state exploration strategies, along with partial and full incremental modes. Here we have not evaluated the SMT incremental mode implemented in ESBMC, since this feature is currently supported only for the SMT solver Z3. Other SMT solvers do support incremental solving, but ESBMC does not yet provide support for incremental solving with them.
CBMC [21] implements a bit-precise decision procedure for the theory of floating-point arithmetic [11]. Both VeriAbs and Pinaka rely on CBMC to verify the underlying C program using that decision procedure. ESBMC originated as a fork of CBMC in 2008, with an improved SMT backend [24] and support for the verification of concurrent programs using an explicit interleaving approach [23]. CBMC uses SAT solvers as its primary engine but offers support for generating an SMT formula for an external SMT solver. ESBMC supports SMT solvers directly, through their APIs, along with the option to output SMT formulae. As a result, the main difference between CBMC and ESBMC here lies in the encoding and checking of the verification conditions that contain floating-point arithmetic.
These results answer our RQ2: our floating-point API is on par with other state-of-the-art tools. VeriAbs and Pinaka implement several heuristics to simplify the satisfiability check using CBMC, while ESBMC, using an incremental approach, produced close results. ESBMC was also slightly faster and provided a few more results than CBMC, which leads us to believe that our tool could also greatly benefit VeriAbs and Pinaka if used as their backend.
5 Related Work
Several symbolic execution tools try to verify programs with floating-point arithmetic by employing different strategies. CoverMe [30] reformulates floating-point constraints as mathematical optimization problems and uses a specially built solver called XSat [29] to check for satisfiability. Pex [54] uses a similar approach, treating floating-point constraints as a search problem and solving them using meta-heuristic search methods. FPSE [10] models floating-point arithmetic using an interval solver over real arithmetic combined with projection functions. HSE [50] extends KLEE [15] to symbolically execute the program and convert floating-point operations into bit-vectors; it then uses SMT solvers to reason about satisfiability. Astrée is a static analysis tool that considers all possible rounding errors when verifying C programs with floating-point numbers [9]. It has been applied to verify embedded software in the flight control software of the Airbus.
Bounded model checkers have also been applied to verify programs with floating-point arithmetic: CBMC [21] and 2LS [53] convert floating-point operations to bit-vectors and use SAT solvers to reason about satisfiability. CPBPV [22] uses bounded model checking combined with its FPCS [43] interval solver to generate tests that violate output constraints in the program.
Brain et al. [12] describe an approach called SymFPU for handling the theory of floating-point arithmetic by reducing it to the theory of bit-vectors. In particular, the authors describe a library of encodings that can be included in SMT solvers to add support for the theory of floating-point, taking into account floating-point reasoning and the fundamentals of circuit design. Brain et al. integrated SymFPU into the SMT solver CVC4 and evaluated it on a broad set of benchmarks; they conclude that SymFPU+CVC4 substantially outperforms all previous systems despite using a straightforward bit-blasting approach for floating-point problems. We could not compare our approach against SymFPU because of bugs in the CVC4 C API; we contacted the authors and will create bug reports about the issues we identified.
6 Conclusions
We have described our new SMT floating-point API, which bit-blasts floating-point arithmetic and thereby extends floating-point support to SMT solvers that only support bit-vector arithmetic. The floating-point API was implemented in the SMT backend of ESBMC. Our experimental results show that Boolector (with Lingeling as the SAT solver) presented the best results: the highest number of correct results within the shortest verification time. We also show that our floating-point API implemented in ESBMC is on par with other state-of-the-art software verifiers: VeriAbs and Pinaka implement several heuristics to simplify the satisfiability check performed by CBMC, while ESBMC, with a straightforward incremental approach, produced close results.
ESBMC has already been extensively used to verify digital systems [1, 5, 6]. However, these projects were limited to fixed-point arithmetic; supporting floating-point encoding will allow researchers to expand their activities in the scientific community. The extensive evaluation performed during the development of these technologies also identified areas to be improved in the solvers and other verification tools. In particular, we submitted a patch to Z3 to optimize the generation of unsigned less-than operations during the bit-blasting of floating-points (https://github.com/Z3Prover/z3/pull/1501, accepted and part of Z3 4.6.1). We also reported bugs concerning floating-point arithmetic to both CBMC (https://github.com/diffblue/cbmc/issues/1944) and MathSAT, which were later confirmed by the developers.
References
 [1] Abreu, R.B., Gadelha, M.R., Cordeiro, L.C., Filho, E.B.d.L., da Silva Jr., W.S.: Bounded model checking for fixed-point digital filters. Journal of the Brazilian Computer Society 22(1), 1:1–1:20 (2016)
 [2] ARM: ARM C Language Extensions 2.1 (2016), IHI 0053D
 [3] Baranová, Z., Barnat, J., Kejstová, K., Kučera, T., Lauko, H., Mrázek, J., Ročkai, P., Štill, V.: Model checking of C and C++ with DIVINE 4. In: Automated Technology for Verification and Analysis. LNCS, vol. 10482, pp. 201–207 (2017)
 [4] Barrett, C., Conway, C., Deters, M., Hadarean, L., Jovanović, D., King, T., Reynolds, A., Tinelli, C.: CVC4. In: Computer-Aided Verification. LNCS, vol. 6806, pp. 171–177 (2011)
 [5] Bessa, I., Ismail, H., Cordeiro, L.C., Filho, J.E.C.: Verification of fixed-point digital controllers using direct and delta forms realizations. Design Automation for Embedded Systems 20(2), 95–126 (2016)
 [6] Bessa, I., Ismail, H., Palhares, R., Cordeiro, L.C., Filho, J.E.C.: Formal non-fragile stability verification of digital control systems with uncertainty. IEEE Transactions on Computers 66(3), 545–552 (2017)
 [7] Beyer, D.: Advances in automatic software verification: SV-COMP 2020. In: Tools and Algorithms for the Construction and Analysis of Systems. LNCS, vol. 12079, pp. 347–367 (2020)
 [8] Beyer, D., Keremoglu, M.E.: CPAchecker: A tool for configurable software verification. In: Computer-Aided Verification. LNCS, vol. 6806, pp. 184–190 (2011)
 [9] Blanchet, B., Cousot, P., Cousot, R., Feret, J., Mauborgne, L., Miné, A., Monniaux, D., Rival, X.: A static analyzer for large safety-critical software. In: Programming Language Design and Implementation. pp. 196–207 (2004)
 [10] Botella, B., Gotlieb, A., Michel, C.: Symbolic execution of floating-point computations: Research articles. Software Testing, Verification & Reliability 16(2), 97–121 (2006)
 [11] Brain, M., D'Silva, V., Griggio, A., Haller, L., Kroening, D.: Deciding floating-point logic with abstract conflict driven clause learning. Formal Methods in System Design 45(2), 213–245 (2014)
 [12] Brain, M., Schanda, F., Sun, Y.: Building better bit-blasting for floating-point problems. In: Tools and Algorithms for the Construction and Analysis of Systems. LNCS, vol. 11427, pp. 79–98 (2019)
 [13] Brain, M., Tinelli, C., Ruemmer, P., Wahl, T.: An automatable formal semantics for IEEE-754 floating-point arithmetic. In: Symposium on Computer Arithmetic. pp. 160–167 (2015)
 [14] Brummayer, R., Biere, A.: Boolector: An efficient SMT solver for bit-vectors and arrays. In: Tools and Algorithms for the Construction and Analysis of Systems. LNCS, vol. 5505, pp. 174–177 (2009)
 [15] Cadar, C., Dunbar, D., Engler, D.: KLEE: Unassisted and automatic generation of high-coverage tests for complex systems programs. In: Symposium on Operating Systems Design and Implementation. pp. 209–224 (2008)
 [16] Chalupa, M., Vitovská, M., Jonás, M., Slaby, J., Strejcek, J.: Symbiotic 4: Beyond reachability (competition contribution). In: Tools and Algorithms for the Construction and Analysis of Systems. LNCS, vol. 10206, pp. 385–389 (2017)
 [17] Chaudhary, E., Joshi, S.: Pinaka: Symbolic execution meets incremental solving (competition contribution). In: Tools and Algorithms for the Construction and Analysis of Systems. LNCS, vol. 11429, pp. 234–238 (2019)
 [18] Chaves, L., Bessa, I., Cordeiro, L.C., Kroening, D., Filho, E.B.d.L.: Verifying digital systems with MATLAB. In: Symposium on Software Testing and Analysis. pp. 388–391 (2017)
 [19] Chimdyalwar, B., Darke, P., Chauhan, A., Shah, P., Kumar, S., Venkatesh, R.: VeriAbs: Verification by abstraction (competition contribution). In: Tools and Algorithms for the Construction and Analysis of Systems. LNCS, vol. 10206, pp. 404–408 (2017)
 [20] Cimatti, A., Griggio, A., Schaafsma, B., Sebastiani, R.: The MathSAT5 SMT solver. In: Tools and Algorithms for the Construction and Analysis of Systems. LNCS, vol. 7795, pp. 93–107 (2013)
 [21] Clarke, E., Kroening, D., Lerda, F.: A tool for checking ANSI-C programs. In: Tools and Algorithms for the Construction and Analysis of Systems. LNCS, vol. 2988, pp. 168–176 (2004)
 [22] Collavizza, H., Michel, C., Ponsini, O., Rueher, M.: Generating test cases inside suspicious intervals for floating-point number programs. In: Constraints in Software Testing, Verification and Analysis. pp. 7–11 (2014)
 [23] Cordeiro, L.C., Fischer, B.: Verifying multi-threaded software using SMT-based context-bounded model checking. In: International Conference on Software Engineering. pp. 331–340 (2011)
 [24] Cordeiro, L.C., Fischer, B., Marques-Silva, J.: SMT-based bounded model checking for embedded ANSI-C software. In: Automated Software Engineering. pp. 137–148 (2009)
 [25] De Moura, L., Bjørner, N.: Z3: An efficient SMT solver. In: Tools and Algorithms for the Construction and Analysis of Systems. LNCS, vol. 4963, pp. 337–340 (2008)
 [26] Dutertre, B.: Yices 2.2. In: Computer-Aided Verification. LNCS, vol. 8559, pp. 737–744 (2014)
 [27] Erkök, L.: Bug in floating-point conversions. https://github.com/Z3Prover/z3/issues/1564 (2018), [Online; accessed January 2020]
 [28] Frantz, G., Simar, R.: Comparing fixed- and floating-point DSPs. Tech. rep., SPRY061, Texas Instruments (2004)
 [29] Fu, Z., Su, Z.: XSat: A fast floating-point satisfiability solver. In: Computer-Aided Verification. LNCS, vol. 9780, pp. 187–209 (2016)
 [30] Fu, Z., Su, Z.: Achieving high coverage for floating-point code via unconstrained programming. In: Programming Language Design and Implementation. pp. 306–319 (2017)
 [31] Gadelha, M.R., Monteiro, F., Cordeiro, L., Nicole, D.: ESBMC v6.0: Verifying C programs using induction and invariant inference. In: Tools and Algorithms for the Construction and Analysis of Systems. LNCS, vol. 11429, pp. 209–213 (2019)
 [32] Gadelha, M.Y.R., Cordeiro, L.C., Nicole, D.A.: Encoding floating-point numbers using the SMT theory in ESBMC: An empirical evaluation over the SV-COMP benchmarks. In: Simpósio Brasileiro de Métodos Formais. LNCS, vol. 10623, pp. 91–106 (2017)
 [33] Gadelha, M.Y.R., Menezes, R., Monteiro, F.R., Cordeiro, L.C., Nicole, D.A.: ESBMC: Scalable and precise test generation based on the floating-point theory (competition contribution). In: Wehrheim, H., Cabot, J. (eds.) International Conference on Fundamental Approaches to Software Engineering. LNCS, vol. 12076, pp. 525–529. Springer (2020)
 [34] Gerrity, G.W.: Computer representation of real numbers. IEEE Transactions on Computers C-31(8), 709–714 (1982)
 [35] Goldberg, D.: What every computer scientist should know about floating-point arithmetic. ACM Computing Surveys 23(1), 5–48 (1991)
 [36] IEEE: IEEE Standard for Floating-Point Arithmetic (2008), IEEE 754-2008
 [37] Ismail, H., Bessa, I., Cordeiro, L.C., Filho, E.B.d.L., Filho, J.E.C.: DSVerifier: A bounded model checking tool for digital systems. In: Symposium on Model Checking Software. LNCS, vol. 9232, pp. 126–131 (2015)
 [38] ISO: C11 Standard (2011), ISO/IEC 9899:2011
 [39] ISO: C++ Standard (2015), ISO/IEC 18661-3:2015
 [40] Kroening, D., Tautschnig, M.: CBMC – C bounded model checker. In: Tools and Algorithms for the Construction and Analysis of Systems. LNCS, vol. 8413, pp. 389–391 (2014)
 [41] Lions, J.L.: ARIANE 5 flight 501 failure. Tech. rep., Inquiry Board (1996)
 [42] Malík, V., Martiček, Š., Schrammel, P., Srivas, M., Vojnar, T., Wahlang, J.: 2LS: Memory safety and non-termination. In: Tools and Algorithms for the Construction and Analysis of Systems. LNCS, vol. 10806, pp. 417–421 (2018)
 [43] Michel, C., Rueher, M., Lebbah, Y.: Solving constraints over floating-point numbers. In: Principles and Practice of Constraint Programming. pp. 524–538 (2001)
 [44] Monniaux, D.: The pitfalls of verifying floating-point computations. ACM Transactions on Programming Languages and Systems 30(3), 12:1–12:41 (2008)
 [45] Muller, J.M., Brisebarre, N., Dinechin, F., Jeannerod, C.P., Lefèvre, V., Melquiond, G., Revol, N., Stehlé, D., Torres, S.: Handbook of Floating-Point Arithmetic. Birkhäuser Boston, 1st edn. (2010)
 [46] Niemetz, A., Preiner, M., Biere, A.: Boolector 2.0 system description. Journal on Satisfiability, Boolean Modeling and Computation 9, 53–58 (2014)
 [47] Nikolić, Z., Nguyen, H.T., Frantz, G.: Design and implementation of numerical linear algebra algorithms on fixed-point DSPs. European Association for Signal Processing 2007(1), 1–22 (2007)
 [48] Noetzli, A.: Failing precondition when multiplying 4-bit significand/4-bit exponent floats. https://github.com/CVC4/CVC4/issues/2182 (2018), [Online; accessed January 2020]
 [49] Patterson, D.A., Hennessy, J.L.: Computer Organization and Design – The Hardware/Software Interface (Revised 4th Edition). Academic Press (2012)
 [50] Quan, M.: Hotspot symbolic execution of floating-point programs. In: Symposium on Foundations of Software Engineering. pp. 1112–1114 (2016)
 [51] Richter, C., Wehrheim, H.: PeSCo: Predicting sequential combinations of verifiers (competition contribution). In: Tools and Algorithms for the Construction and Analysis of Systems. LNCS, vol. 11429, pp. 229–233 (2019)
 [52] Rümmer, P., Wahl, T.: An SMT-LIB theory of binary floating-point arithmetic. In: SMT Workshop (2010)
 [53] Schrammel, P., Kroening, D., Brain, M., Martins, R., Teige, T., Bienmüller, T.: Incremental bounded model checking for embedded software (extended version). Formal Aspects of Computing 29(5), 911–931 (2017)
 [54] Tillmann, N., De Halleux, J.: Pex: White box test generation for .NET. In: Tests and Proofs. pp. 134–153 (2008)
 [55] Wiffin, E.: 0.30000000000000004.com. https://0.30000000000000004.com/ (2012), [Online; accessed April 2020]
Appendix A Support for the FP logic
SMT FP operations | Z3 v4.7.1 | MathSAT v5.5.1 | CVC4 v1.6-prerelease | ESBMC FP API
Create floating point sort  
Create rounding mode sort  
Create floating point literal  
Create plus and minus infinity  
Create plus and minus zeroes  
Create NaN
Absolute value operator  
Negation operator  
Addition operator  
Subtraction operator  
Multiplication operator  
Division operator  
Fused multiply-add operator (in ESBMC, the fused multiply-add operation uses the bit-blasting API when using MathSAT)
Square root operator  
Remainder operator  
Rounding to Integral operator  
Minimum operator  
Maximum operator  
Less than or equal to operator  
Less than operator  
Greater than or equal to operator  
Greater than operator  
Equality operator  
IsNormal check  
IsSubnormal check  
IsZero check  
IsInfinite check  
IsNaN check  
IsNegative check  
IsPositive check  
Convert to FP from real  
Convert to FP from signed BV  
Convert to FP from unsigned BV  
Convert to FP from another FP  
Convert to unsigned BV from FP  
Convert to signed BV from FP  
Convert to real from FP  
Convert to IEEE BV from FP (not part of the SMT FP logic)
Convert to floating-point from IEEE BV (not part of the SMT FP logic)