# Automatic generation and verification of test-stable floating-point code

Test instability in a floating-point program occurs when the control flow of the program diverges from its ideal execution assuming real arithmetic. This phenomenon is caused by the presence of round-off errors that affect the evaluation of arithmetic expressions occurring in conditional statements. Unstable tests may lead to significant errors in safety-critical applications that depend on numerical computations. Writing programs that take into consideration test instability is a difficult task that requires expertise in finite precision computations and rounding errors. This paper presents a toolchain to automatically generate and verify a provably correct test-stable floating-point program from a functional specification in real arithmetic. The input is a real-valued program written in the Prototype Verification System (PVS) specification language and the output is a transformed floating-point C program annotated with ANSI/ISO C Specification Language (ACSL) contracts. These contracts relate the floating-point program to its functional specification in real arithmetic. The transformed program detects if unstable tests may occur and, in these cases, issues a warning and terminates. An approach that combines the Frama-C analyzer, the PRECiSA round-off error estimator, and PVS is proposed to automatically verify that the generated program is correct in the sense that, if the program terminates without a warning, it follows the same computational path as its real-valued functional specification.


## 1 Introduction

The development of software that depends on floating-point computations is particularly challenging due to the presence of round-off errors in computer arithmetic. Round-off errors originate from the difference between real numbers and their finite precision representation. Since round-off errors accumulate during numerical computations, they may significantly affect the evaluation of both arithmetic and Boolean expressions. In particular, unstable tests occur when the guard of a conditional statement contains a floating-point expression whose round-off error makes the actual Boolean value of the guard differ from the value that would be obtained assuming real arithmetic. The presence of unstable tests further amplifies the divergence between the output of a floating-point program and its ideal evaluation in real arithmetic. This divergence may lead to catastrophic consequences in safety-critical applications.

Writing software that takes into consideration how unstable tests affect the execution flow of floating-point programs requires a deep comprehension of floating-point arithmetic. Furthermore, this process can be tedious and error-prone for programs with function calls and complex mathematical expressions. This paper presents a fully automatic toolchain to generate and verify test-stable floating-point C code from a functional specification in real arithmetic. This toolchain consists of:

• a formally-verified program transformation that generates and instruments a floating-point program to detect unstable tests,

• PRECiSA [MoscatoTDM17, TitoloFMM18], a static analyzer that computes sound estimations of the round-off error that may occur in a floating-point program,

• Frama-C [KirchnerKPSY15], a collaborative tool suite for the analysis of C code, and

• the Prototype Verification System (PVS) [OwreRS92], an interactive theorem prover for higher-order logic.

The input of the toolchain is a PVS specification of a numerical algorithm in real arithmetic, the desired floating-point format (single or double precision), and, optionally, initial ranges for the input variables. This program specification is straightforwardly implemented using floating-point arithmetic. This is done by replacing each real-valued operator with its floating-point counterpart. Furthermore, each real-number constant and variable is rounded to its closest floating-point number in the chosen format and rounding modality. Then, the proposed program transformation is applied. Numerically unstable tests are replaced with more restrictive ones that preserve the control flow of the real-valued original specification. These new tests take into consideration the round-off error that may occur when the expressions of the original program are evaluated in floating-point arithmetic. In addition, the transformation instruments the program to emit a warning when the floating-point flow may diverge with respect to the original real number specification.
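The effect of the guard-strengthening step can be sketched on a toy test (a hand-written illustration, not actual toolchain output; the function name, the warning encoding, and the parameter `err` are assumptions, with `err` standing in for a sound round-off error bound such as one computed by PRECiSA):

```c
#include <assert.h>

/* Hypothetical sketch of replacing an unstable test with more restrictive
   guards.  Specification over the reals:  if x <= y then A else B.
   err is a placeholder for a sound bound on the round-off error of the
   floating-point expressions x and y. */
typedef enum { TOOK_THEN, TOOK_ELSE, WARNING } branch_t;

branch_t guarded_leq(double x, double y, double err) {
  if (x <= y - err) return TOOK_THEN;  /* real test x <= y provably holds */
  if (x >  y + err) return TOOK_ELSE;  /* real test x <= y provably fails */
  return WARNING;                      /* flows may diverge: warn and stop */
}
```

When neither strengthened guard holds, the floating-point value is too close to the decision boundary to determine which branch the ideal real-valued program takes, so the instrumented program warns instead of silently committing to a possibly wrong branch.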

The transformed program is expressed in C syntax along with ACSL annotations stating the relationship between the floating-point C implementation and its functional specification in real arithmetic. To this end, the round-off errors that occur in conditional tests and in the overall computation of the program are soundly estimated by the static analyzer PRECiSA. The correctness property of the C program is specified as an ACSL post-condition stating that if the program terminates without a warning, it follows the same computational path as the real-valued specification, i.e., all unstable tests are detected.
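The shape of such an annotation can be illustrated on a toy function (a hand-written illustration, not PRECiSA/Frama-C output; the function name, input range, and the bound `0x1p-50` are assumptions). In ACSL, arithmetic inside annotations is interpreted over the reals, so the ensures clause below relates the double-precision result to the exact real-arithmetic value of the same expression:

```c
#include <assert.h>

/*@ requires 0.0 <= x <= 1.0;
  @ ensures \abs(\result - (x * x + 2.0 * x)) <= 0x1p-50;
  @*/
double poly(double x) {
  return x * x + 2.0 * x;   /* evaluated in double precision */
}
```

The contract states that, on the declared input range, the floating-point result stays within the given bound of the ideal real-valued polynomial; in the actual toolchain this bound is a certified PRECiSA estimate rather than a hand-picked constant.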

An extension to the Frama-C/WP plug-in (Weakest Precondition calculus) is implemented to automatically generate verification conditions in the PVS language from the annotated C code. These verification conditions encode the correctness of the transformed program and are automatically discharged by proof strategies implemented in PVS. Therefore, no expertise in theorem proving nor knowledge on floating-point arithmetic is required from the user to verify the correctness of the generated C program.

The contributions of this work are summarized below.

• A new and enhanced version of the program transformation initially defined in [TitoloMFM18] that adds support for function calls, bounded recursion (for-loops), and symbolic parameters.

• A PVS formalization of the correctness of the proposed transformation.

• An implementation of the proposed transformation integrated within the static analyzer PRECiSA.

• An extension of the Frama-C/WP plug-in to generate proof obligations in the PVS specification language.

• Proof strategies in PVS to automatically discharge the verification conditions generated by the Frama-C/WP plug-in.

The remainder of the paper is organized as follows. Section 2 provides technical background on floating-point numbers, round-off errors, and unstable tests. A denotational semantics that collects information about the differences between floating-point and real computational flows is presented in Section 3. The proposed program transformation to detect test instability is described in Section 4. Section 5 illustrates the use of the proposed toolchain to automatically generate and verify a provably correct floating-point C program from a PVS real-valued specification. Section 6 discusses related work and Section 7 concludes the paper.

## 2 Floating-Point Numbers, Round-Off Errors, and Unstable Tests

Floating-point numbers [IEEE754floating] are finite precision representations of real numbers widely used in computer programs. In this work, a floating-point number, or a float, is formalized as a pair of integers (m, e), where m is called the significand and e the exponent of the float [Daumas2001, BoldoMunoz06]. A floating-point format f is defined as a pair of integers (p, e_min), where p is called the precision and e_min is called the minimal exponent. Given a base b, a pair (m, e) represents a floating-point number in the format f if and only if it holds that |m| < b^p and e ≥ e_min. For instance, IEEE single and double precision floating-point numbers are specified by the formats (24, −149) and (53, −1074), respectively. Henceforth, 𝔽 will denote the set of floating-point numbers and the expression ṽ will denote a floating-point number in 𝔽. A conversion function R : 𝔽 → ℝ is defined to refer to the real number represented by a given float, i.e., R((m, e)) = m · b^e, where b is the base of the representation. The expression F_f(r) denotes the floating-point number in format f closest to r ∈ ℝ, i.e., the rounding of r. The format f will be omitted when clear from the context or irrelevant.

###### Definition 1 (Round-off error)

Let ṽ be a floating-point number that represents a real number r. The difference |R(ṽ) − r| is called the round-off error (or rounding error) of ṽ with respect to r.

The unit in the last place (ulp) is a measure of the precision of a floating-point number as a representation of a real number. Given r ∈ ℝ, ulp(r) represents the difference between the two closest consecutive floating-point numbers ṽ₁ and ṽ₂ such that R(ṽ₁) ≤ r ≤ R(ṽ₂) and ṽ₁ ≠ ṽ₂. The ulp can be used to bound the round-off error of a real number with respect to its floating-point representation in the following way:

 |R(F(r)) − r| ≤ (1/2) ulp(r). (1)

Round-off errors accumulate through the computation of mathematical operators. Therefore, an initial error that seems negligible may become significantly larger when combined and propagated inside nested mathematical expressions. The accumulated round-off error is the difference between a floating-point expression and its real-valued counterpart and it depends on (a) the error introduced by the application of the floating-point operator ⊙̃ versus its real-valued counterpart ⊙ and (b) the propagation of the errors carried by the arguments, i.e., the difference between ṽᵢ and rᵢ, for 1 ≤ i ≤ n, in the application ⊙̃(ṽ₁, …, ṽₙ). Henceforth, it is assumed that for any floating-point operator of interest ⊙̃, there exists an error bound function ϵ⊙̃ such that, if |R(ṽᵢ) − rᵢ| ≤ eᵢ holds for all 1 ≤ i ≤ n, then:

 |R(⊙̃(ṽ₁, …, ṽₙ)) − ⊙(r₁, …, rₙ)| ≤ ϵ⊙̃(r₁, e₁, …, rₙ, eₙ). (2)

For example, in the case of the sum, the accumulated round-off error is bounded by ϵ+̃(r₁, e₁, r₂, e₂) = e₁ + e₂ + (1/2) ulp(|r₁ + r₂| + e₁ + e₂). More examples of error bound functions can be found in [MoscatoTDM17, TitoloFMM18].

The evaluation of Boolean expressions is also affected by rounding errors. When a Boolean expression evaluates differently in real and floating-point arithmetic, it is said to be unstable. The presence of unstable tests amplifies the effect of round-off errors in numerical programs since the computational flow of a floating-point program may significantly diverge from the ideal execution of its representation in real arithmetic. In fact, the output of a floating-point program is not only directly influenced by rounding errors accumulating in the mathematical expressions, but also by the error of taking the incorrect branch in the case of unstable tests.

Given a set Ω̃ of pre-defined floating-point operations, the corresponding set Ω of operations over real numbers, a set F of function symbols, a finite set V of variables representing real values, and a finite set Ṽ of variables representing floating-point values, where V and Ṽ are disjoint, the sets A and Ã of arithmetic expressions over real numbers and over floating-point numbers, respectively, are defined by the following grammars.

 A ::= d | x | ⊙(A, …, A) | f(A, …, A),   Ã ::= d̃ | x̃ | ⊙̃(Ã, …, Ã) | f̃(Ã, …, Ã),

where d ∈ ℝ, x ∈ V, ⊙ ∈ Ω, f ∈ F, d̃ ∈ 𝔽, x̃ ∈ Ṽ, ⊙̃ ∈ Ω̃, and f̃ ∈ F. It is assumed that there is a function χr : Ṽ → V that associates to each floating-point variable x̃ a variable χr(x̃) representing the real value of x̃. The function R_A : Ã → A converts an arithmetic expression on floating-point numbers to an arithmetic expression on real numbers. It is defined by replacing each floating-point operation with the corresponding one on real numbers and by applying R and χr to floating-point values and variables, respectively. Conversely, the function F_A : A → Ã converts a real expression into a floating-point one by applying the rounding F to constants and variables and by replacing each real-valued operator with the corresponding floating-point one. By abuse of notation, floating-point expressions are interpreted as their real number evaluation when occurring inside a real-valued expression.

Boolean expressions over the reals and over the floats are defined by the following grammar,

 B ::= true | false | B ∧ B | B ∨ B | ¬B | A < A | A ≤ A

where A ∈ A; the grammar of floating-point Boolean expressions 𝔹̃ is obtained by replacing A with Ã. The conjunction ∧, disjunction ∨, negation ¬, and the relations < and ≤ have the usual classical logic meaning. The functions R_𝔹 and F_𝔹 convert a Boolean expression on floating-point numbers to a Boolean expression on real numbers and vice-versa. They are defined, respectively, as the natural extension of R_A and F_A to Boolean expressions. Given a variable assignment σ : V → ℝ and B ∈ 𝔹, σ(B) denotes the evaluation of the real Boolean expression B. Similarly, given σ̃ : Ṽ → 𝔽 and B̃ ∈ 𝔹̃, σ̃(B̃) denotes the evaluation of the floating-point Boolean expression B̃.

###### Definition 2 (Unstable Test)

A test φ̃ ∈ 𝔹̃ is unstable if there exist two assignments σ : V → ℝ and σ̃ : Ṽ → 𝔽 such that for all x̃ ∈ Ṽ, σ(χr(x̃)) = R(σ̃(x̃)) and σ(R_𝔹(φ̃)) ≠ σ̃(φ̃). Otherwise, the conditional expression is said to be stable.

In other words, a test φ̃ is unstable when there exists an assignment of the free variables in φ̃ such that φ̃ evaluates to a different Boolean value with respect to its real-valued counterpart R_𝔹(φ̃). The evaluation of a conditional statement is said to follow an unstable path when its guard is unstable and it is evaluated differently in real and floating-point arithmetic. When the real and floating-point flows coincide, the evaluation is said to follow a stable path.
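A minimal concrete instance on any IEEE-754 double-precision platform: in exact real arithmetic 0.1 + 0.2 − 0.3 equals 0, so the real-valued guard below is false; its double-precision evaluation is approximately 5.55 × 10⁻¹⁷, so the floating-point guard is true and the test is unstable.

```c
#include <assert.h>

/* Unstable test demo: the real-valued guard (0.1 + 0.2 - 0.3 > 0) is
   false, but its double-precision evaluation takes the other branch. */
int unstable_branch(void) {
  double x = 0.1 + 0.2 - 0.3;   /* exactly 0 over the reals, ~5.55e-17 here */
  if (x > 0.0)
    return 1;   /* branch actually taken in floating-point */
  return 0;     /* branch the ideal real-valued program would take */
}
```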

## 3 A Denotational Semantics for Floating-Point Programs

This section illustrates a denotational semantics to reason about round-off errors and test instability in floating-point programs. This semantics collects information about both real and floating-point path conditions and soundly estimates the difference between the ideal real-valued result and the actual floating-point one. This information is collected symbolically. Therefore, the semantics supports symbolic parameters for which the numerical inputs are unknown. This semantics is an extension of the one presented in [TitoloFMM18] and it has been implemented in the static analyzer PRECiSA, which computes provably correct over-estimations of the round-off errors occurring in a floating-point program.

The language considered in this work is a simple functional language with binary and n-ary conditionals, let-in expressions, arithmetic expressions, function calls, for-loops, and an exceptional statement warning. The syntax of floating-point program expressions S̃ is given by the following grammar.

 S̃ ::= d̃ | x̃ | ⊙̃(Ã, …, Ã) | f̃(Ã, …, Ã) | let x̃ = Ã in S̃ | if B̃ then S̃ else S̃ | if B̃ then S̃ [elsif B̃ then S̃]ⁿ else S̃ | warning (3)

where d̃ ∈ 𝔽, x̃ ∈ Ṽ, ⊙̃ ∈ Ω̃, f̃ ∈ F, Ã ∈ Ã, and B̃ ∈ 𝔹̃. The notation [elsif B̃ then S̃]ⁿ denotes a list of n conditional branches.

Bounded recursion is added to the language as syntactic sugar using the for construct. The expression for(i₀, n, acc₀, λ(i, acc). S̃) emulates a for loop where i is the control variable that ranges from i₀ to n, acc is the variable where the result is accumulated with initial value acc₀, and S̃ is the body of the loop. For instance, for(1, 10, 0, λ(i, acc). i +̃ acc) represents the value f̃(10), where f̃ is the recursive function f̃(i) = if i ≤ 1 then 1 +̃ 0 else i +̃ f̃(i − 1).
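Under this accumulator reading (sketched here in C, since the concrete λ-notation of the construct is elided in the text), a bounded for expression summing the control variable from 1 to 10 into an accumulator initialized to 0 unfolds to:

```c
#include <assert.h>

/* C rendering of the bounded-recursion sugar: the control variable i
   runs from 1 to 10, acc accumulates starting at 0, and the loop body
   combines i with the accumulator. */
double for_sum_1_to_10(void) {
  double acc = 0.0;
  for (int i = 1; i <= 10; i++)
    acc = i + acc;   /* body: i + acc */
  return acc;
}
```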

A floating-point program is defined as a set of function declarations of the form f̃(x̃₁, …, x̃ₙ) = S̃, where x̃₁, …, x̃ₙ are pairwise distinct variables in Ṽ and all free variables appearing in S̃ are in {x̃₁, …, x̃ₙ}. The natural number n is called the arity of f̃. Henceforth, it is assumed that programs are well-formed in the sense that, in a program P̃, for every function call g̃(Ã₁, …, Ãₘ) that occurs in the body of the declaration of a function f̃, a unique function g̃ of arity m is defined in P̃ before f̃. Hence, the only recursion allowed is the one provided by the for-loop construct. The set of floating-point programs is denoted as P̃.

The proposed semantics collects, for each combination of real and floating-point program paths, the real and floating-point path conditions and three symbolic expressions representing: (1) the value of the output assuming the use of real arithmetic, (2) the value of the output assuming floating-point arithmetic, and (3) an over-approximation of the maximum round-off error occurring in the computation. In addition, a flag is provided indicating whether the element refers to a stable or an unstable path. Since the semantics collects information about real and floating-point execution paths, it is possible to consider the error of taking the incorrect branch compared to the ideal execution using exact real arithmetic. This enables a sound treatment of unstable tests. This information is stored in a conditional error bound.

###### Definition 3 (Conditional error bound)

A conditional error bound is an expression of the form ⟨η, η̃⟩_t ↠ (r, ṽ, e), where η ∈ 𝔹, η̃ ∈ 𝔹̃, r, e ∈ A, ṽ ∈ Ã, and t ∈ {s, u}.

Intuitively, ⟨η, η̃⟩_t ↠ (r, ṽ, e) indicates that if both conditions η and η̃ are satisfied, the output of the ideal real-valued implementation of the program is r, the output of the floating-point execution is ṽ, and the round-off error is at most e, i.e., |r − R(ṽ)| ≤ e. The sub-index t is used to mark by construction whether a conditional error bound corresponds to an unstable path, when t = u, or to a stable path, when t = s.
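As a data-structure intuition only (the actual semantics manipulates symbolic conditions and expressions, not concrete doubles or strings; all names here are illustrative), a conditional error bound can be pictured as:

```c
#include <assert.h>

/* Concrete picture of a conditional error bound <eta, eta~>_t ->> (r, v, e).
   Symbolic conditions are rendered as strings purely for illustration. */
typedef enum { STABLE, UNSTABLE } path_flag;

typedef struct {
  const char *real_cond;  /* eta:  path condition over the reals   */
  const char *fp_cond;    /* eta~: path condition over the floats  */
  double r;               /* ideal real-valued output              */
  double v;               /* floating-point output                 */
  double e;               /* error bound: |r - R(v)| <= e          */
  path_flag t;            /* stable (s) or unstable (u) marker     */
} cond_err_bound;
```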

Let C be the set of all conditional error bounds and ℘(C) be the domain formed by sets of conditional error bounds. An environment is defined as a function mapping a variable to a set of conditional error bounds, i.e., Env := Ṽ → ℘(C). The empty environment is denoted as ⊥_Env and maps every variable to the empty set ∅. Let F̃ be the set of all possible function calls. An interpretation is a function I : F̃ → ℘(C) modulo variance, i.e., up to a renaming of variables: two interpretations are variants if, for each function call, there exists a renaming under which they return the same set of conditional error bounds. The set of all interpretations is denoted as I. The empty interpretation is denoted as ⊥_I and maps every function call to ∅.

Given I ∈ I and env ∈ Env, the semantics of program expressions is defined in Figure 1 as a function E that returns the set of conditional error bounds representing the possible real and floating-point results, their difference, and their corresponding path conditions. Conditional error bounds ⟨η, η̃⟩_t ↠ (r, ṽ, e) whose conditions' conjunction is unsatisfiable, i.e., η ∧ R_𝔹(η̃) is unsatisfiable, are considered spurious and they are dropped from the semantics since they do not correspond to an actual trace of the program. In the following, the non-trivial cases are described.

Variable.

The semantics of a variable x̃ consists of two cases. If x̃ belongs to the environment, then the variable has been previously bound to a program expression Ã through a let-in expression. In this case, the semantics of x̃ is exactly the semantics of Ã. If x̃ does not belong to the environment, then x̃ is a parameter of the function. Here, a new conditional error bound is added with two placeholders χr(x̃) and χe(x̃), representing the real value and the error of x̃, respectively.

Mathematical Operator.

The semantics of a floating-point operation ⊙̃(Ã₁, …, Ãₙ) is computed by composing the semantics of its operands. The real and floating-point values are obtained by applying the corresponding arithmetic operation to the values of the operands. The effect of the warning construct is propagated in the arithmetic expressions. Thus, it is assumed that for all floating-point and real operators, ⊙̃(ṽ₁, …, ṽₙ) = warning when ṽᵢ = warning for some 1 ≤ i ≤ n. The new conditions are obtained as the combination of the conditions of the operands. The new conditional error bounds for ⊙̃(Ã₁, …, Ãₙ) are marked unstable if any of the conditional error bounds in the semantics of the operands is unstable: the flag t is defined as u if there exists an operand whose flag tᵢ = u; otherwise, it is defined as s.

Let-in expression.

The semantics of the expression let x̃ = Ã in S̃ updates the current environment by associating with the variable x̃ the semantics of the expression Ã.

Binary conditional.

The semantics of the conditional if B̃ then S̃₁ else S̃₂ uses an auxiliary condition propagation operator.

###### Definition 4 (Condition propagation operator)

Let c = ⟨η, η̃⟩_t ↠ (r, ṽ, e), b ∈ 𝔹, and b̃ ∈ 𝔹̃. The condition propagation operator is defined as (b, b̃) ⇓ c := ⟨η ∧ b, η̃ ∧ b̃⟩_t ↠ (r, ṽ, e) if η ∧ b ∧ R_𝔹(η̃ ∧ b̃) is satisfiable; otherwise, it is undefined. The definition of ⇓ naturally extends to sets of conditional error bounds, i.e., given D ⊆ C, (b, b̃) ⇓ D := {(b, b̃) ⇓ c | c ∈ D and (b, b̃) ⇓ c is defined}.

The semantics of S̃₁ and S̃₂ are enriched with information about the fact that real and floating-point control flows match, i.e., both B̃ and its real counterpart R_𝔹(B̃) have the same truth value. In addition, new conditional error bounds are built to model the unstable cases when real and floating-point control flows do not coincide and, therefore, real and floating-point computations diverge. For example, if B̃ is satisfied but R_𝔹(B̃) is not, the then branch is taken in the floating-point computation, but the else branch would have been taken in the real one. In this case, the real condition and its corresponding output are taken from the semantics of S̃₂, while the floating-point condition and its corresponding output are taken from the semantics of S̃₁. The condition (¬R_𝔹(B̃), B̃) is propagated in order to model that B̃ holds but R_𝔹(B̃) does not. The conditional error bounds representing this case are marked with u.

N-ary conditional.

The semantics of an n-ary conditional is composed of stable and unstable cases. The stable cases are built from the semantics of all the program sub-expressions by enriching them with information stating that the corresponding guard and its real counterpart hold and that all the previous guards and their real counterparts do not hold. All the unstable combinations are built by combining the real parts of the semantics of a program sub-expression S̃ᵢ and the floating-point contributions of a different program sub-expression S̃ⱼ, with i ≠ j. In addition, the condition propagation operator is used to propagate the information that the real guard of S̃ᵢ and the floating-point guard of S̃ⱼ hold, while the guards of the previous branches do not hold.

Function call.

The semantics of a function call f̃(Ã₁, …, Ãₙ) combines the conditions coming from the interpretation of the function and the ones coming from the semantics of the parameters. Variables representing real values, floating-point values, and errors of the formal parameters are replaced with the expressions coming from the semantics of the actual parameters. The notation expr[x̃ ← Ã] denotes the substitution of Ã for x̃ in the expression expr.

The semantics of a program P̃ ∈ P̃ is defined as the least fixed point of the immediate consequence operator P[[P̃]] : I → I, i.e., given I ∈ I, P[[P̃]]_I is the interpretation defined as follows for each function symbol f̃ defined in P̃.

 P[[P̃]]_I(f̃(x̃₁, …, x̃ₙ)) := E[[S̃]]_I ⊥_Env   if f̃(x̃₁, …, x̃ₙ) = S̃ ∈ P̃. (4)

The least fixed point of P[[P̃]] is guaranteed to exist by the Knaster-Tarski fixpoint theorem [Tarski55] since P[[P̃]] is monotonic over I. This least fixed point converges in a finite number of steps for the programs with bounded recursion considered in this paper.

###### Example 1

Consider the function t̃coa, which is part of DAIDALUS (Detect and Avoid Alerting Logic for Unmanned Systems), a NASA library that implements detect-and-avoid algorithms for unmanned aircraft systems; DAIDALUS is available from https://shemesh.larc.nasa.gov/fm/DAIDALUS/. This function computes the time to co-altitude of two vertically converging aircraft given their relative vertical position s̃ and relative vertical velocity ṽ. When the aircraft are vertically diverging, the function returns 0.

 t̃coa(s̃, ṽ) = if s̃ ∗̃ ṽ < 0 then −(s̃ /̃ ṽ) else 0

The semantics of t̃coa consists of four conditional error bounds:

 { ⟨χr(s̃) ∗ χr(ṽ) < 0, s̃ ∗̃ ṽ < 0⟩_s ↠ (−(χr(s̃)/χr(ṽ)), −(s̃ /̃ ṽ), ϵ/̃(χr(s̃), χe(s̃), χr(ṽ), χe(ṽ))),
   ⟨χr(s̃) ∗ χr(ṽ) ≥ 0, s̃ ∗̃ ṽ ≥ 0⟩_s ↠ (0, 0, 0),
   ⟨χr(s̃) ∗ χr(ṽ) ≥ 0, s̃ ∗̃ ṽ < 0⟩_u ↠ (0, −(s̃ /̃ ṽ), ϵ/̃(χr(s̃), χe(s̃), χr(ṽ), χe(ṽ)) + |0 − (χr(s̃)/χr(ṽ))|),
   ⟨χr(s̃) ∗ χr(ṽ) < 0, s̃ ∗̃ ṽ ≥ 0⟩_u ↠ (−(χr(s̃)/χr(ṽ)), 0, |0 − (χr(s̃)/χr(ṽ))|) }.

The first two elements correspond to the cases where real and floating-point computational flows coincide. In these cases, the round-off error is bounded by ϵ/̃(χr(s̃), χe(s̃), χr(ṽ), χe(ṽ)) when the then branch is taken; otherwise, it is 0 since the integer 0 is exactly representable as a float. The other two elements model the unstable paths. In these cases, the error is computed as the difference between the outputs of the two branches plus the accumulated round-off error of the floating-point result.
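For illustration, the instrumented, test-stable version of t̃coa that the transformation would aim to produce can be sketched in C as follows (the error bound `e_sv` on the product s*v and the warning value −1.0 are placeholders; the actual bound is a PRECiSA-computed constant and the actual code is generated by the toolchain, not hand-written):

```c
#include <assert.h>

#define TCOA_WARNING (-1.0)   /* assumed warning value (times are >= 0) */

/* Test-stable sketch of tcoa: the guard s*v < 0 is strengthened by a
   sound error bound e_sv so that, when either strengthened guard holds,
   the ideal real-valued program takes the same branch. */
double tcoa_stable(double s, double v, double e_sv) {
  double sv = s * v;
  if (sv < -e_sv) return -(s / v);  /* real guard s*v < 0 holds as well */
  if (sv >=  e_sv) return 0.0;      /* real guard s*v < 0 fails as well */
  return TCOA_WARNING;              /* possibly unstable: warn and stop */
}
```

The two strengthened guards correspond exactly to the two stable conditional error bounds above, while the fall-through warning case covers the region where the two unstable bounds could apply.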

A real-valued program (or, simply, a real program) has the same structure as a floating-point program, where floating-point expressions are replaced with real number ones. A real-valued program does not contain any warning statements. The set of real-valued programs is denoted as P. The function F_P converts a real program P into a floating-point one by applying, respectively, F_𝔹 and F_A to the Boolean and arithmetic expressions occurring in the function declarations in P. Conversely, R_P returns the real-number counterpart of a floating-point program. For every floating-point program P̃, it holds that F_P(R_P(P̃)) = P̃.

The presented semantics correctly models the difference between the floating-point program and its real number counterpart as stated in the following theorem.

###### Theorem 3.1

Let P̃ ∈ P̃ be a floating-point program. For every function symbol f̃ defined in P̃, let f be its real-valued counterpart defined in R_P(P̃) such that, for all 1 ≤ i ≤ n, xᵢ = χr(x̃ᵢ). It holds that

 |f(x₁, …, xₙ) − R(f̃(x̃₁, …, x̃ₙ))| ≤ e,

where e is the maximum of the error expressions e′ occurring in the conditional error bounds ⟨η, η̃⟩_t ↠ (r, ṽ, e′) in the semantics of f̃(x̃₁, …, x̃ₙ). The expression e is called the overall error of the function f̃.

###### Proof (Sketch)

Given f̃, for each declaration f̃(x̃₁, …, x̃ₙ) = S̃ occurring in P̃, there exists a declaration f(x₁, …, xₙ) = S in R_P(P̃). Thus, f̃(x̃₁, …, x̃ₙ) = S̃ ∈ P̃ holds if and only if f(x₁, …, xₙ) = S ∈ R_P(P̃) holds. The proof proceeds by structural induction on the structure of the program expression S̃. The main cases are the arithmetic expressions and the conditionals.

Given an arithmetic expression Ã, from Formula (2) it follows that the error expression e associated with Ã is a correct over-approximation of the round-off error; therefore |r − R(ṽ)| ≤ e, where ⟨η, η̃⟩_t ↠ (r, ṽ, e) is the corresponding conditional error bound.

Let S̃ = if B̃ then S̃₁ else S̃₂, and assume the evaluation follows a stable path. By structural induction and by Definition 4, it follows that

In addition, given an unstable conditional error bound ⟨η, η̃⟩_u ↠ (r, ṽ, e), the error of taking an unstable path is defined as the difference between the real and the floating-point results, which is bounded by the following value