Programs using floating-point numbers are notoriously difficult to reason about [Mon08]. Many factors complicate the task:
compilers may transform the code in a way that does not preserve the semantics of floating-point computations;
floating-point formats are an implementation-defined aspect of most programming languages;
there are different, incompatible implementations of the operations for the same floating-point format;
mathematical libraries often come with little or no guarantee about what is actually computed;
programmers have a hard time predicting and avoiding phenomena caused by the limited range and precision of floating-point numbers (overflow, absorption, cancellation, underflow, …); moreover, the features that modern floating-point formats provide in order to better handle such phenomena (infinities, signed zeroes, denormal numbers, non-numeric objects a.k.a. NaNs) come with their share of issues;
rounding is a source of confusion in itself; moreover, there are several possible rounding modes and programs can change the rounding mode at any time.
As a result of these difficulties, the verification of floating-point programs in industry relies, almost exclusively, on informal methods, mainly testing, or on the evaluation of the numerical accuracy of computations, which only allows one to determine conservative (but often too loose) bounds on the propagated error [DGP09].
The satisfactory formal treatment of programs engaging in floating-point computations requires an equally satisfactory solution to the difficulties summarized in the above enumeration. Progress has been made, but more remains to be done. Let us review each point:
Some compilers provide options to refrain from rearranging floating-point computations. When these are not available or cannot be used, the only possibility is to verify the generated machine code or some intermediate code whose semantics is guaranteed to be preserved by the compiler back-end.
Even though the floating-point formats in use are an implementation-defined aspect of, say, C and C++ (this is not relevant if one analyzes machine code or sufficiently low-level intermediate code), the wide adoption of the IEEE 754 standard for binary floating-point arithmetic [IEE08] has improved things considerably.
The IEEE 754 standard does provide some strong guarantees, e.g., that the results of individual additions, subtractions, multiplications, divisions and square roots are correctly rounded, that is, it is as if the results were computed in the reals and then rounded as per the rounding mode in effect. But it does not provide guarantees on the results of other operations and on other aspects, such as, e.g., when the underflow exception is signaled [CKVDV02]. (The indeterminacy described in [CKVDV02] is also present in the 2008 edition of IEEE 754 [IEE08].)
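Correct rounding can be observed directly. The following sketch uses Python, whose floats are IEEE 754 binary64 doubles: each individual operation behaves as if the exact real result were computed and then rounded, which is precisely why the familiar `0.1 + 0.2` surprise arises (the operands themselves are already rounded).

```python
# Python floats are IEEE 754 binary64. The literal 0.1 denotes the double
# nearest to the real 1/10 (not 1/10 itself), and likewise for 0.2 and 0.3.
a, b = 0.1, 0.2

# The sum a + b is the correctly rounded real sum of the *doubles* a and b.
# That value differs from the double nearest to the real 3/10:
s = a + b
print(s == 0.3)   # False
print(s)          # 0.30000000000000004
```

Correct rounding thus guarantees a well-defined result for each operation, but it does not make composite expressions behave like their real counterparts.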
A pragmatic yet effective approach to support formal reasoning on commonly used implementations of mathematical functions has recently been proposed in [BCGB16, BCGB17]. The proposed techniques exploit the fact that floating-point implementations of mathematical functions preserve, not completely but to a great extent, the piecewise monotonic nature of the real functions they approximate.
The contribution of the present paper is in this area: by defining and formally proving the correctness of constraint propagation algorithms for IEEE 754 arithmetic constraints, we enable the use of formal methods for a broad range of programs. Such methods, i.e., abstract interpretation and symbolic model checking, allow proving that a number of generally unwanted phenomena (e.g., generation of NaNs and infinities, absorption, cancellation, instability…) do not happen or, in case they do happen, allow the generation of a test vector to reproduce the issue.
Handling of all IEEE 754 rounding modes, and being resilient to uncertainty about the rounding mode in effect, is another original contribution of this paper.
While the round-to-nearest rounding mode is, by far, the most frequently used one, it must be taken into account that:
the possibility of programmatically changing the rounding mode is granted by IEEE 754 and is offered by most of its implementations (e.g., in the C programming language, via the fegetround() and fesetround() functions declared in the <fenv.h> standard header);
setting the rounding mode to something different from round-to-nearest can be done by third parties in a way that was not anticipated by programmers: this may cause unwanted non-determinism in video games [Fie10], and there is nothing preventing the abuse of this feature for more malicious ends, denial-of-service being only the least dangerous in the range of possibilities. Leaving malware aside, there are graphic and printer drivers and sound libraries that are known to change the rounding mode and may fail to set it back [Wat08].
As a possible way of tackling the difficulties described until now, and enabling sound formal verification of floating-point computations, this paper introduces new algorithms for the propagation of arithmetic constraints over floating-point numbers. These algorithms are called filtering algorithms as their purpose is to prune the domains of possible variable values by filtering out those values that cannot be part of the solution of a system of constraints. Algorithms of this kind must be employed in constraint solvers that are required in several different areas, such as automated test-case generation, exception detection or the detection of subnormal computations. In this paper we propose fully detailed, provably correct filtering algorithms for floating-point constraints, which handle all values, including symbolic values (NaNs, infinities and signed zeros), and rounding modes defined by IEEE 754. Note that filtering techniques used in solvers over the reals do not preserve all solutions of constraints over floating-point numbers [MRL01, Mic02], and therefore they cannot be used to prune the domains of floating-point variables reliably. This leads to the need for filtering algorithms such as those we introduce here.
Before defining our filtering algorithms in a detailed and formal way, we provide a more comprehensive context on the propagation of floating-point constraints from a practical point of view (Sections 1.1 and 1.2), and justify their use in formal program analysis and verification (Section 1.3).
1.1 From Programs to Floating-Point Constraints
Independently from the application, program analysis starts with parsing, the generation of an abstract syntax tree and the generation of various kinds of intermediate program representations. An important intermediate representation is called three-address code (TAC). In this representation, complex arithmetic expressions and assignments are decomposed into sequences of assignment instructions of the form
A further refinement consists in the computation of the static single assignment form (SSA) whereby, labeling each assigned variable with a fresh name, assignments can be considered as if they were equality constraints. For example, the TAC form of the floating-point assignment is , which in SSA form becomes . These, in turn, can be regarded as the conjunction of the constraints and , where by and we denote the multiplication and addition operations on floating-point numbers, respectively. The Boolean comparison expressions that appear in the guards of if statements can be translated into constraints similarly. This way, a C/C++ program translated into an SSA-based intermediate representation can be represented as a set of constraints on its variables. Constraints can be added or removed from this set in order to obtain a constraint system that describes a particular behavior of the program (e.g., the execution of a certain instruction, the occurrence of an overflow in a computation, etc.). Once such a constraint system has been solved, the variable domains only contain values that cause the desired behavior. If one of the domains is empty, then that behavior can be ruled out.
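The decomposition described above can be sketched concretely. The following Python fragment uses a hypothetical tuple representation (the names `tac`, `to_ssa` and the versioning scheme are illustrative, not taken from any particular analyzer) to turn the TAC form of an assignment like `r = x * y + z;` into SSA equality constraints:

```python
# Hypothetical sketch: three-address code for the C assignment  r = x * y + z;
# Each entry is (destination, operator, operand, operand).
tac = [
    ("t1", "*", "x", "y"),   # t1 := x * y
    ("r",  "+", "t1", "z"),  # r  := t1 + z
]

def to_ssa(tac):
    """SSA renaming: give each assigned variable a fresh name, so that
    assignments can be read as equality constraints on floating-point
    operations."""
    version = {}
    constraints = []
    for dst, op, a, b in tac:
        a_ssa = f"{a}_{version.get(a, 1)}"
        b_ssa = f"{b}_{version.get(b, 1)}"
        version[dst] = version.get(dst, 0) + 1
        constraints.append((f"{dst}_{version[dst]}", op, a_ssa, b_ssa))
    return constraints

print(to_ssa(tac))
# [('t1_1', '*', 'x_1', 'y_1'), ('r_1', '+', 't1_1', 'z_1')]
```

Each resulting tuple can be read as a constraint, e.g., t1_1 = x_1 ⊗ y_1, where ⊗ denotes floating-point multiplication.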
1.2 Constraint Propagation
Once constraints have been generated, they are amenable to constraint propagation: under this name goes any technique that consists in considering a subset of the constraints at a time, explicitly removing elements from the set of values that are candidate to be assigned to the constrained variables. The values that can be removed are those that cannot possibly participate in a solution for the selected set of constraints. For instance, if a set of floating-point constraints contains the constraint , then any value outside the set can be removed from further consideration. The degree up to which this removal can actually take place depends on the data structure used to record the possible values for , intervals and multi-intervals being typical choices for numerical constraints. For the example above, if intervals are used, the removal can only be partial (negative floating-point numbers are removed from the domain of ). With multi-intervals more precision is possible, but any approach based on multi-intervals must take measures to avoid combinatorial explosion.
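The precision difference between the two representations can be illustrated with a toy constraint. In this hedged sketch (the constraint x·x = 4 and the function names are invented for illustration), the solution set {−2, 2} can only be over-approximated by its hull when a single interval is used, while a multi-interval captures it exactly:

```python
# Toy illustration: pruning the domain of x under the constraint x*x == 4,
# whose solution set is {-2.0, 2.0}.

def prune_interval(lo, hi):
    # A single interval can only keep the hull of the solutions inside [lo, hi].
    sols = [s for s in (-2.0, 2.0) if lo <= s <= hi]
    return (min(sols), max(sols)) if sols else None   # None = empty domain

def prune_multi_interval(lo, hi):
    # A multi-interval (list of disjoint intervals) represents each solution exactly.
    return [(s, s) for s in (-2.0, 2.0) if lo <= s <= hi]

print(prune_interval(-10.0, 10.0))        # (-2.0, 2.0): spurious values remain
print(prune_multi_interval(-10.0, 10.0))  # [(-2.0, -2.0), (2.0, 2.0)]
```

An empty result (here `None`) is what allows a behavior to be ruled out, as explained in Section 1.1.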
In this paper, we only focus on interval-based constraint propagation: the algorithms we present for intervals can be rather easily generalized to the case of multi-intervals. We make the further assumption that the floating-point formats available to the analyzed program are also available to the analyzer: this is indeed quite common due to the wide adoption of the IEEE 754 formats.
Interval-based floating-point constraint propagation consists in iteratively narrowing the intervals associated to each variable: this process is called filtering. A projection is a function that, given a constraint and the intervals associated to two of the variables occurring in it, computes a possibly refined interval for the third variable (the projection is said to be over the third variable). Taking as an example, the projection over is called direct projection (it goes in the same sense of the TAC assignment it comes from), while the projections over and are called indirect projections.
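The shape of direct and indirect projections can be sketched for an addition constraint z = x ⊕ y. The sketch below deliberately ignores rounding and symbolic values (a sound treatment of those is exactly what the paper's filtering algorithms provide) and works on plain (lo, hi) pairs; all function names are illustrative:

```python
import math

# Hedged sketch of projections for the constraint z = x + y over intervals,
# ignoring rounding. Intervals are (lo, hi) pairs.

def direct_projection(x, y):
    # Refine z from x and y:  z in [xl + yl, xu + yu].
    return (x[0] + y[0], x[1] + y[1])

def indirect_projection(z, y):
    # Refine x from z and y (x = z - y):  x in [zl - yu, zu - yl].
    return (z[0] - y[1], z[1] - y[0])

def meet(a, b):
    # Interval intersection; None means the domain became empty.
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

x, y, z = (0.0, 10.0), (1.0, 2.0), (-math.inf, 5.0)
z = meet(z, direct_projection(x, y))    # z narrows to (1.0, 5.0)
x = meet(x, indirect_projection(z, y))  # x narrows to (0.0, 4.0)
print(z, x)
```

Filtering iterates such projections until a fixpoint is reached; the paper's contribution is making each projection correct in the presence of rounding, infinities, signed zeros and NaNs.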
1.3 Applications of Constraint Propagation to Program Analysis
When integrated in a complete program verification framework, the constraint propagation techniques presented in this paper enable activities such as abstract interpretation, automatic test-input generation and symbolic model checking. In particular, symbolic model checking consists in exhaustively proving that a certain property, called specification, is satisfied by the system under examination, which in this case is a computer program. A model checker can either prove that the given specification is satisfied, or provide a useful counterexample whenever it is not.
For programs involving floating-point computations, some of the most significant properties that can be checked consist in ruling out certain undesired exceptional behaviors such as overflows, underflows and the generation of NaNs, and numerical pitfalls such as absorption and cancellation. In more detail, we call a numeric-to-NaN transition a floating-point arithmetic computation that returns a NaN despite its operands being non-NaN. We call a finite-to-infinite transition the event of a floating-point operation returning an infinity when executed on finite operands, which occurs if the operation overflows. An underflow occurs when the output of a computation is too small to be represented in the machine floating-point format without a significant loss in accuracy. Specifically, we divide underflows into three categories, depending on their severity:
- Gradual underflow:
an operation performed on normalized numbers results in a subnormal number. In other words, a subnormal has been generated out of normalized numbers: enabling gradual underflow is indeed the very reason for the existence of subnormals in IEEE 754. However, as subnormals come with their share of problems, generating them is better avoided.
- Hard underflow:
an operation performed on normalized numbers results in a zero, whereas the result computed on the reals is nonzero. This is called hard because the relative error is 100%, gradual underflow does not help (the output is zero, not a subnormal), and, as neither input is a subnormal, this operation may constitute a problem per se.
- Soft underflow:
an operation with at least one subnormal operand results in a zero, whereas the result computed on the reals is nonzero. The relative error is still 100% but, as one of the operands is a subnormal, this operation may not be the root cause of the problem.
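All three categories can be triggered in IEEE 754 binary64 arithmetic (which Python floats implement); the operand values below are arbitrary picks that land in each category:

```python
import sys

MIN_NORMAL = sys.float_info.min   # smallest positive normal double, ~2.2e-308

# Gradual underflow: two normal operands, nonzero subnormal result.
g = 1e-300 * 1e-10                # real result 1e-310, below MIN_NORMAL
assert 0.0 < g < MIN_NORMAL

# Hard underflow: two normal operands, zero result, although the real
# product (1e-400) is nonzero.
h = 1e-200 * 1e-200
assert h == 0.0

# Soft underflow: a subnormal operand, zero result (real result ~1e-330).
s = g * 1e-20
assert s == 0.0
print("gradual:", g, "hard:", h, "soft:", s)
```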
Absorption occurs when the result of an arithmetic operation is equal to one of the operands, even though the other one is not the neutral element of that operation. For example, absorption occurs when summing a number with another one of much smaller exponent: if the precision of the floating-point format in use is not sufficient to represent them, the additional digits that would appear in the mantissa of the result are rounded out.
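A minimal binary64 instance of absorption: 2⁵³ is the first power of two at which the 53-bit significand can no longer accommodate an added unit, so adding 1.0 leaves the operand unchanged:

```python
# Absorption in binary64: the significand has 53 bits, so at magnitude 2**53
# a summand of 1.0 falls entirely below the last representable digit.
big = 2.0 ** 53
assert big + 1.0 == big          # 1.0 is absorbed (round half to even)
assert (big + 2.0) - big == 2.0  # 2.0 is still large enough to register
```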
(Absorption.) Let with , let be any IEEE 754 floating-point operator, and let . Then gives rise to absorption if
and either and , or and ;
and either and , or and ;
and either and , or and ;
, and .
In this section, we show how symbolic model checking can be used to either rule out or pinpoint the presence of these run-time anomalies in a software program by means of a simple but meaningful practical example. Floating-point constraint propagation has been fully implemented with the techniques presented in this paper in the commercial tool ECLAIR (https://bugseng.com/eclair, last accessed on January 23rd, 2019), developed and commercialized by BUGSENG. ECLAIR is a generic platform for the formal verification of C/C++ and Java source code, as well as Java bytecode. The filtering algorithms described in the present paper are used in the C/C++ modules of ECLAIR that are responsible for semantic analysis based on abstract interpretation [CC77], automatic generation of test-cases, and symbolic model checking. The latter two are based on constraint satisfaction problems [GBR98, GBR00], whose solution is based on multi-interval refinement and is driven by labeling and backtracking search. Constraints arising from the use of mathematical functions provided by C/C++ standard libraries are also supported. Unfortunately, most implementations of such libraries are not correctly rounded, which makes the realization of filtering algorithms for them rather challenging. In ECLAIR, propagation for such constraints is performed by exploiting the piecewise monotonicity properties of those functions, which are partially retained by all implementations we know of [BCGB16, BCGB17].
To demonstrate the capabilities of the techniques presented in this paper, we applied them to the C code excerpt of Figure 1.3. It is part of the implementation of the Bessel functions in the GNU Scientific Library (https://www.gnu.org/software/gsl/, last accessed on January 25th, 2019), a widely adopted library for numerical computations. In particular, it computes the scaled regular modified cylindrical Bessel function of first order, , where is a purely imaginary argument. The function stores the computed result in the val field of the data structure result, together with an estimate of the absolute error (result->err). Additionally, the function returns an int status code, which reports to the user the occurrence of certain exceptional conditions, such as overflows and underflows. In particular, this function only reports an underflow when the argument is smaller than a constant. We analyzed this program fragment with ECLAIR’s symbolic model checking engine, setting it up to detect overflow (finite-to-infinite transitions), underflow and absorption events, and NaN generation (numeric-to-NaN transitions). We found that the underflow guarded against by the if statement of line 1.3 is far from being the only numerical anomaly affecting this function. In total, we found one numeric-to-NaN transition, two possible finite-to-infinite transitions, two hard underflows, five gradual underflows and six soft underflows. The code locations in which they occur are all reported in Figure 1.3.
For each one of these events, ECLAIR yields an input value causing it. Also, it optionally produces an instrumented version of the original code, and runs it on every input it reports, checking whether it actually triggers the expected behavior or not. Hence, the produced input values are validated automatically. For example, the hard underflow of line 1.3 is triggered by the input . If the function is executed with , the multiplication of line 1.3 yields a negative infinity. Since , we know would also cause the overflow. The same value of causes an overflow in line 1.3 as well. The division in the same line produces a NaN if the function is executed with .
Whether the events we found could cause significant issues depends on the context in which they occur. For example, even in the event of absorption, the output of the overall computation could be correctly rounded. Whether or not this is acceptable must be assessed depending on the application. Indeed, ECLAIR’s ability to detect absorption can be a valuable tool in deciding whether a floating-point format with a higher precision is needed. Nevertheless, some of these events are certainly problematic. The structure of the function suggests that no underflow should occur if control flow reaches past the if guard of line 1.3. On the contrary, several underflows may occur afterwards, some of which are even hard. Moreover, the generation of infinities or NaNs should certainly either be avoided, or signaled by returning a suitable error code (and not GSL_SUCCESS). The input values reported by ECLAIR could help the developer fix the problems detected in the function of Figure 1.3. Furthermore, since the algorithms presented in this paper are provably correct, it is possible to state that this code excerpt presents no issues other than those we reported above. Notice, however, that due to the way the standard C mathematical library functions are treated, the results above only hold with respect to the implementation of the function in use. In particular, the machine we used for the analysis is equipped with the x86_64 version of EGLIBC 2.19, running on Ubuntu 14.04.1.
1.4 Related Work
In [Mic02], C. Michel proposed a framework for filtering constraints over floating-point numbers. He considered monotonic functions of one argument and devised exact direct and correct indirect projections for each possible rounding mode. Extending this approach to binary arithmetic operators is not an easy task. In [BGM06], the authors extended the approach of [Mic02] by proposing filtering algorithms for the four basic binary arithmetic operators when only the round-to-nearest ties-to-even rounding mode is available. They also provided tables for indirect function projections when zeros and infinities are considered under this rounding mode. In our approach, we generalize the initial work of [BGM06] by providing extended interval reasoning. The algorithms and tables we present in this paper consider all rounding modes and contain all details and special cases, allowing the interested reader to write an implementation of interval-based filtering code.
Several analyses for the automatic detection of floating-point exceptions have been proposed in the literature. In [BVLS13] the authors propose a symbolic execution system for detecting floating-point exceptions. It is based on the following steps: each numerical program is transformed to directly check each exception-triggering condition; the transformed program is symbolically executed in real arithmetic to find a (real) candidate input that triggers the exception; the real candidate is converted into a floating-point number, which is finally tested against the original program. Since approximating floating-point arithmetic with real arithmetic does not preserve the feasibility of execution paths and outputs in any sense, they cannot guarantee that once a real candidate has been selected, a floating-point number raising the same exception can be found. Even more importantly, even if the transformed program over the reals is exception-free, the original program using floating-point arithmetic may not actually be exception-free. Symbolic execution also underlies the analysis proposed in [WLZ17], which aims at detecting floating-point exceptions by combining it with value range analysis. The value range of each variable is updated with the appropriate path conditions by leveraging interval constraint-propagation techniques. Since the projections used in that paper have not been proved to yield correct approximations, it can be the case that the obtained value ranges do not contain all possible floating-point values for each variable. Indeed, valid values may be removed from value ranges, which leads to false negatives. In Section 5, the tool for floating-point exception detection presented in [WLZ17] is compared with the same analysis based on our propagation algorithms. As expected, no false positives were detected among the results of our analysis.
Recently, [GWBC18] presented a preliminary investigation on inverse projections for addition under the round-to-nearest rounding mode. It proposes algorithms for devising lower bounds for inverse projections of addition that combine classical filtering based on the properties of addition with filtering based on the properties of subtraction constraints on floating-point numbers as introduced by Marre and Michel in [MM10]. In this way, they are able to prove the optimality of the lower bounds computed by their algorithms. It is worth noting that the filtering algorithms on intervals presented in [MM10] have been corrected for addition/subtraction constraints and extended to multiplication and division under the round-to-nearest rounding mode by some of the present authors (see [BCGG13, BCGG16]). In this paper we discuss the cases in which the filtering algorithms in [BCGG13, BCGG16, MM10] should be used in combination or in alternation with our filters for arithmetic constraints. However, the main aim of this paper is to provide an exhaustive and provably correct treatment of filtering algorithms supporting all special cases for all arithmetic constraints under all rounding modes.
This paper improves the state of the art in several directions:
all rounding modes are treated and there is no assumption that the rounding mode in effect is known and unchangeable (increased generality);
utilization, to a large extent, of machine floating-point arithmetic in the analyzer with few rounding mode changes (increased performance);
accurate treatment of round half to even —the default rounding mode of IEEE 754— (increased precision);
explicit and complete treatment of intervals containing symbolic values (i.e., infinities and signed zeros);
application of floating-point constraint propagation techniques to enable detection of program anomalies such as overflows, underflows, absorption, generation of NaNs.
1.6 Plan of the Paper
The rest of the paper is structured as follows: Section 2 recalls the required notions and introduces the notation used throughout the paper; Section 3 presents some results on the treatment of uncertainty on the rounding mode in effect and on the quantification of the rounding errors committed in floating-point arithmetic operations; Section 4 contains the complete treatment of addition and division constraints on intervals, presenting detailed tables for special values and the refinement algorithms. Section 5 presents some experiments on constraint-based floating-point exception detection and concludes the main part of the paper. Appendix 0.A contains the complete treatment of subtraction and multiplication constraints. The proofs of all results presented in this paper can be found in Appendix 0.B.
We will denote by $\mathbb{R}_+$ and $\mathbb{R}_-$ the sets of strictly positive and strictly negative real numbers, respectively. The set of affinely extended reals, $\mathbb{R} \cup \{-\infty, +\infty\}$, is denoted by $\overline{\mathbb{R}}$.
(IEEE 754 binary floating-point numbers.) A set $\mathbb{F}(p, e_{\max})$ of IEEE 754 binary floating-point numbers [IEE08] is uniquely identified by: $p$, the number of significant digits (precision); $e_{\max}$, the maximum exponent, the minimum exponent being $e_{\min} = 1 - e_{\max}$. The set of binary floating-point numbers includes:
all signed zero and non-zero numbers of the form $(-1)^s \cdot m \cdot 2^e$, where
$s$ is the sign bit;
the exponent $e$ is any integer such that $e_{\min} \le e \le e_{\max}$;
the mantissa $m$, with $0 \le m < 2$, is a number represented by a string of $p$ binary digits with a “binary point” after the first digit: $m = d_0 \,.\, d_1 d_2 \cdots d_{p-1}$, with $d_i \in \{0, 1\}$;
the infinities $+\infty$ and $-\infty$; the NaNs: qNaN (quiet NaN) and sNaN (signaling NaN).
The smallest positive normal floating-point number is $f_{\min}^{\mathrm{nor}} = 2^{e_{\min}}$ and the largest finite one is $f_{\max} = 2^{e_{\max}} (2 - 2^{1-p})$. The non-zero floating-point numbers whose absolute value is less than $2^{e_{\min}}$ are called subnormal: they always have fewer than $p$ significant digits. Every finite floating-point number is an integral multiple of the smallest subnormal magnitude $f_{\min} = 2^{e_{\min}+1-p}$. Note that the signed zeroes $+0$ and $-0$ are distinct floating-point numbers. For a non-zero number $x$, we will write $\mathrm{even}(x)$ (resp., $\mathrm{odd}(x)$) to signify that the least significant digit of $x$’s mantissa, $d_{p-1}$, is $0$ (resp., $1$).
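For the binary64 format (p = 53, e_max = 1023) these quantities can be checked directly against the format constants that Python exposes:

```python
import math
import sys

p, e_max = 53, 1023        # IEEE 754 binary64 parameters
e_min = 1 - e_max          # = -1022

# Smallest positive normal, largest finite, and smallest subnormal magnitude:
assert sys.float_info.min == 2.0 ** e_min
assert sys.float_info.max == 2.0 ** e_max * (2 - 2 ** (1 - p))
assert math.ulp(0.0) == 2.0 ** (e_min + 1 - p)       # 5e-324

# +0.0 and -0.0 are distinct floating-point numbers, though they compare
# numerically equal:
assert 0.0 == -0.0
assert math.copysign(1.0, -0.0) == -1.0
```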
In the sequel we will only be concerned with IEEE 754 binary floating-point numbers, and we will write simply $\mathbb{F}$ for $\mathbb{F}(p, e_{\max})$ when there is no risk of confusion.
(Floating-point symbolic order.) Let be any IEEE 754 floating-point format. The relation is such that, for each , if and only if both and are not NaNs and either: and , or and , or and , or and , or and . The partial order is such that, for each , if and only if both and are not NaNs and either or .
Note that without the NaNs is linearly ordered with respect to ‘’.
For that is not a NaN, we will often confuse the floating-point number with the extended real number it represents, the floats $+0$ and $-0$ both corresponding to the real number $0$. Thus, when we write, e.g., we mean that is numerically less than (for example, we have though , but note that implies ). Numerical equivalence will be denoted by ‘’ so that , and all denote .
(Floating-point predecessors and successors.) The partial function is such that, for each ,
The partial function is defined by reversing the ordering, so that, for each , whenever is defined.
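On binary64, the successor and predecessor partial functions can be realized with `math.nextafter` (available since Python 3.9); the function names below are chosen to mirror the text and are not part of any standard API:

```python
import math

def succ(x):
    """Successor of x in binary64 (next float toward +infinity)."""
    return math.nextafter(x, math.inf)

def pred(x):
    """Predecessor of x in binary64 (next float toward -infinity)."""
    return math.nextafter(x, -math.inf)

assert succ(1.0) == 1.0 + 2.0 ** -52   # the float right after 1.0
assert succ(0.0) == 5e-324             # smallest positive subnormal
assert pred(succ(1.0)) == 1.0
# Note: math.nextafter leaves +inf fixed, whereas the paper's successor
# function is partial (undefined where there is no next float).
```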
Let denote the usual arithmetic operations over the reals. Let denote the set of IEEE 754 rounding modes: round towards minus infinity (), round towards zero (), round towards plus infinity (), and round to nearest (). We will use the notation , where and , to denote an IEEE 754 floating-point operation with rounding .
The rounding functions are defined as follows. Note that they are not defined for $0$: the IEEE 754 standard, in fact, for operations whose exact result is $0$, bases the choice between $+0$ and $-0$ on the operation itself and on the sign of the arguments [IEE08, Section 6.3].
(Rounding functions.) The rounding functions defined by IEEE 754, , , and , are such that, for each ,
In this paper we use intervals of floating-point numbers in that are not NaNs.
(Floating-point intervals.) Let be any IEEE 754 floating-point format. The set of floating-point intervals with boundaries in is given by
denotes the set . is a bounded meet-semilattice with least element , greatest element , and the meet operation, which is induced by set-intersection, will be simply denoted by .
Floating-point intervals with boundaries in allow us to capture the extended numbers in : NaNs should be tracked separately.
3 Rounding Modes and Rounding Errors
The IEEE 754 standard for floating-point arithmetic introduces different rounding operators, among which the user can choose on compliant platforms. The rounding mode in use affects the results of the floating-point computations performed, and it must therefore be taken into account during constraint propagation. In this section, we present some abstractions aimed at facilitating the treatment of rounding modes in our constraint projection algorithms.
3.1 Dealing with Uncertainty on the Rounding Mode in Effect
Even if programs that change the rounding mode in effect are quite rare, whenever this happens the rounding mode in effect at each program point cannot be known precisely. So, for a completely general treatment of the problem, such as the one we are proposing, our choice is to consider a set of possible rounding modes. To this aim, in this section we define two auxiliary functions that, given a set of rounding modes possibly in effect, select a worst-case rounding mode that ensures the soundness of interval propagation. Soundness is guaranteed even if the rounding mode used in the actual computation differs from the one selected, as long as the former is contained in the set. Of course, if a program never changes the rounding mode, the set of possible rounding modes boils down to a singleton.
The functions presented in the first definition select the rounding modes that can be used to compute the lower (function ) and upper (function ) bounds of an operation in case of direct projections.
(Rounding mode selectors for direct projections.) Let be any IEEE 754 floating-point format and be a set of rounding modes. Let also and be such that either or . Then
The following functions select the rounding modes that will be used for the lower (functions and ) and upper (functions and ) bounds of an operation when computing inverse projections. Note that there are different functions depending on which one of the two operands is being projected: and for the right one, and for the left one.
(Rounding mode selectors for inverse projections.) Let be any IEEE 754 floating-point format and be a set of rounding modes. Let also and First, we define
Secondly, we define the following selectors:
The usefulness in interval propagation of the functions presented above will become clearer after considering Proposition 1. Moreover, it is worth noting that, if the set of possible rounding modes consists of a single rounding mode, then all the previously defined functions return that very rounding mode. In that case, the claims of the next proposition trivially hold.
Let , , , and ‘’ be as in Definition 7. Let also and . Then, for each
Moreover, there exist such that
Now, consider with and . Let and , according to Definition 8. Moreover, let be the minimum such that , and let be the maximum such that . Then, the following inequalities hold:
The same result holds if , with and .
Thanks to Proposition 1 we need not be concerned with sets of rounding modes, as any such set can always be mapped to a pair of “worst-case rounding modes” which, in addition, are never round-to-zero. Therefore, projection functions can act as if the only possible rounding mode in effect was the one returned by the selection functions, greatly simplifying their logic. For example, consider the constraint , meaning “ is obtained as the result of for some .” Of course, implies and , which, by Proposition 1, imply and , where and . The results obtained by projection functions that only consider and are consequently valid for any .
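The worst-case idea can be illustrated with a deliberately crude sketch: whatever rounding mode may actually be in effect, an interval whose lower bound is computed as if rounding toward −∞ and whose upper bound is computed as if rounding toward +∞ is sound. Since Python cannot switch the hardware rounding mode, directed rounding is emulated here with a one-ulp outward nudge after each round-to-nearest operation; this is coarser than the paper's selectors, but safe:

```python
import math
from fractions import Fraction

# Crude worst-case bounds for x (+) y: nudge the round-to-nearest result one
# ulp outward in each direction. Since round-to-nearest is within half an ulp
# of the exact sum, the nudged pair brackets the result under *any* rounding
# mode that might be in effect.
def add_down(x, y):
    return math.nextafter(x + y, -math.inf)

def add_up(x, y):
    return math.nextafter(x + y, math.inf)

x, y = 0.1, 0.2
lo, hi = add_down(x, y), add_up(x, y)

# The exact real sum of the doubles x and y lies strictly inside [lo, hi]:
exact = Fraction(x) + Fraction(y)
assert Fraction(lo) <= exact <= Fraction(hi)
print(lo, hi)
```

The paper's selection functions are finer than this: they pick, per bound, an actual rounding mode from the given set, avoiding the systematic one-ulp loss of the sketch.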
3.2 Rounding Errors
For the precise treatment of all rounding modes it is useful to introduce a notation that expresses, for each floating-point number , the maximum error that is committed when approximating a real number with under the different rounding modes (as shown in the previous section, we need not be concerned with round-to-zero).
(Rounding Error Functions.) The partial functions , , and are defined as follows, for each that is not a NaN:
An interesting observation is that the values of the functions introduced in Definition 9 are always representable in and thus their computation does not require extra precision, something that, as we shall see, is exploited in the implementation. This is the reason why, for round-to-nearest, and have been defined as twice the approximation error bounds: the absolute value of the bounds themselves, being , is not representable in for each such that .
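The round-to-nearest case can be illustrated on binary64 using `math.ulp` (the unit in the last place of a float, itself always a representable float): rounding a real r to z = fl(r) commits an absolute error of at most ulp(z)/2, and it is the doubled bound ulp(z) that is guaranteed to be representable.

```python
import math
from fractions import Fraction

# 2**-60 is below half an ulp of 1.0 (half ulp = 2**-53), so the sum rounds
# back to 1.0 under round-to-nearest; the error respects the half-ulp bound.
x, y = 1.0, 2.0 ** -60
z = x + y
assert z == 1.0

exact = Fraction(x) + Fraction(y)            # the exact real sum
assert abs(Fraction(z) - exact) <= Fraction(math.ulp(z)) / 2

# The doubled bound, ulp(z), is itself a floating-point number:
assert math.ulp(1.0) == 2.0 ** -52
```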
When the round-to-nearest rounding mode is in effect, Proposition 2 relates the bounds of a floating-point interval with those of the corresponding interval of it represents.
Let . Then
3.3 Real Approximations of Floating-Point Constraints
In this section we show how inequalities of the form and , with can be reflected on the reals. Indeed, it is possible to algebraically manipulate constraints on the reals so as to numerically bound the values of floating-point quantities. The results of this and of the next section will be useful in designing inverse projections.
The following implications hold, for each such that all the involved expressions do not evaluate to NaN, for each floating-point operation and the corresponding extended real operation , where the entailed inequalities are to be interpreted over :
moreover, if ,
moreover, if ,
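The flavor of these implications can be checked numerically. A minimal sketch, assuming round-to-nearest and no overflow: the computed floating-point sum brackets the exact real sum of its operands strictly between the result's floating-point neighbors, which is exactly the kind of real inequality the implications entail.

```python
from fractions import Fraction
import math

# x (+) y is the rounded sum; the exact real sum x + y lies strictly
# between the floating-point neighbors of the computed result
# (barring overflow).
x, y = 0.1, 0.2
z = x + y                                   # IEEE 754 round-to-nearest
exact = Fraction(x) + Fraction(y)           # the real sum of the two doubles
assert Fraction(math.nextafter(z, -math.inf)) < exact
assert exact < Fraction(math.nextafter(z, math.inf))
```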
3.4 Floating-Point Approximations of Constraints on the Reals
In this section, we show how possibly complex constraints involving floating-point operations can be approximated directly using floating-point computations, without necessarily using infinite-precision arithmetic.
Without being too formal, let us consider the domain of abstract syntax trees with leaves labeled by constants in and internal nodes labeled with a symbol in denoting an operation on the reals. While developing propagation algorithms, it is often necessary to deal with inequalities between real numbers and expressions described by such syntax trees. In order to successfully approximate them using the available floating-point arithmetic, we need two functions: and . These functions provide an abstraction of evaluation algorithms that: (a) respect the indicated approximation direction; and (b) are as precise as practical. Point (a) can always be achieved by substituting the real operations with the corresponding floating-point operations rounded in the right direction. For point (b), maximum precision can trivially be achieved whenever the expression involves only one operation; generally speaking, the possibility of efficiently computing a maximally precise (i.e., correctly rounded) result depends on the form of the expression (see, e.g., [KLLM09]).
(Evaluation functions.) The two partial functions and are such that, for each that evaluates on to a nonzero value,
Let be a non-NaN floating-point number and an expression that evaluates on to a nonzero value. The following implications hold:
In addition, if (or, equivalently, ) we also have
likewise, if (or, equivalently, ) we have
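Since the formal definition of the evaluation functions is elided in this copy, here is a hypothetical sketch of such directed evaluation for a tiny expression language. Directed rounding is emulated through exact rational arithmetic (`round_dir` is an invented helper; a real implementation would instead execute floating-point operations with the rounding mode set accordingly). Multiplication here assumes nonnegative subexpressions; the general case requires sign analysis on the operands.

```python
from fractions import Fraction
import math

def round_dir(r, d):
    """Round the exact rational r to a double: down if d < 0, up if d > 0."""
    f = float(r)                              # nearest first
    if d < 0 and Fraction(f) > r:
        return math.nextafter(f, -math.inf)
    if d > 0 and Fraction(f) < r:
        return math.nextafter(f, math.inf)
    return f

def eval_dir(e, d):
    """Evaluate an expression tree, returning a guaranteed lower (d = -1)
    or upper (d = +1) double bound on its exact real value.  Leaves are
    doubles; internal nodes are ("op", left, right) tuples."""
    if isinstance(e, float):
        return e
    op, l, r = e
    if op == "+":
        v = Fraction(eval_dir(l, d)) + Fraction(eval_dir(r, d))
    elif op == "-":                           # flip direction on the subtrahend
        v = Fraction(eval_dir(l, d)) - Fraction(eval_dir(r, -d))
    elif op == "*":                           # assumes nonnegative operands!
        v = Fraction(eval_dir(l, d)) * Fraction(eval_dir(r, d))
    else:
        raise ValueError(op)
    return round_dir(v, d)

e = ("-", ("*", 0.1, 0.1), 0.3)              # (0.1 * 0.1) - 0.3 over the reals
lo, hi = eval_dir(e, -1), eval_dir(e, +1)
exact = Fraction(0.1) * Fraction(0.1) - Fraction(0.3)
assert Fraction(lo) <= exact <= Fraction(hi)
```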
4 Propagation for Simple Arithmetic Constraints
In this section we present our propagation procedure for the solution of floating-point constraints obtained from the analysis of programs engaging in IEEE 754 computations.
The general propagation algorithm, which we already introduced in Section 1.2, consists of an iterative procedure that applies the direct and inverse filtering algorithms associated with each constraint, narrowing down the intervals associated with each variable. The process stops when a fixed point is reached, i.e., when a further application of any filtering algorithm does not change the domain of any variable.
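The iterative scheme can be sketched as follows (all names are hypothetical; real filters would be the projection algorithms of Section 4.3, here replaced by a toy equality constraint for illustration):

```python
def intersect(a, b):
    """Intersection of two closed intervals given as (lo, hi) pairs."""
    return (max(a[0], b[0]), min(a[1], b[1]))

def propagate(domains, filters):
    """Apply every filter repeatedly until no domain changes (a fixed point).
    domains maps variable names to intervals; each filter maps the whole
    domain dictionary to a (possibly narrowed) copy."""
    changed = True
    while changed:
        changed = False
        for f in filters:
            new = f(dict(domains))
            if new != domains:
                domains = new
                changed = True
    return domains

# Toy constraint x = y: one filter narrows x with y, the other y with x.
filters = [
    lambda d: {**d, "x": intersect(d["x"], d["y"])},
    lambda d: {**d, "y": intersect(d["y"], d["x"])},
]
doms = propagate({"x": (2.0, 10.0), "y": (0.0, 5.0)}, filters)
assert doms == {"x": (2.0, 5.0), "y": (2.0, 5.0)}
```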
4.1 Propagation Algorithms: Definitions
Constraint propagation is a process that prunes the domains of program variables by deleting values that cannot satisfy the constraints involving those variables. In this section, we will state these ideas more formally.
Let and . Consider a constraint with , and .
Direct propagation aims at inferring a narrower interval for variable , by considering the domains of and . It amounts to computing a possibly refined interval for , , such that
Property (25) is known as the direct propagation correctness property.
Of course it is always possible to take , but the objective is to compute a “small” interval, possibly the smallest one enjoying (25), compatibly with the available information. The smallest that satisfies (25) is called optimal and is such that
Property (26) is called the direct propagation optimality property.
Inverse propagation, on the other hand, uses the domain of the result to deduce new domains for the operands, or . For the same constraint, , it means computing a possibly refined interval for , , such that
Property (27) is known as the inverse propagation correctness property. Again, taking is always possible and sometimes unavoidable. The best we can hope for is to be able to determine the smallest such set, i.e., satisfying
Property (28) is called the inverse propagation optimality property. Satisfying this last property can be very difficult.
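A correct (but deliberately non-optimal) inverse projection for addition can be sketched as follows. The helper names are invented, not the paper's: the idea is that under any of the four rounding modes the exact real sum lies strictly between the floating-point neighbors of the result, so shifting by the other operand's interval and rounding outward satisfies the correctness property (27) while generally missing optimality (28).

```python
import math

def pred(f):
    return math.nextafter(f, -math.inf)

def succ(f):
    return math.nextafter(f, math.inf)

def inverse_add_x(z, y):
    """Correct-but-not-optimal inverse projection for z = x (+) y.
    Under any of the four rounding modes, the exact real sum x + y lies
    in (pred(zl), succ(zh)); shifting by -y and rounding the subtraction
    outward yields an interval that loses no solution for x."""
    (zl, zh), (yl, yh) = z, y
    lo = pred(pred(zl) - yh)      # extra pred/succ: outward rounding of
    hi = succ(succ(zh) - yl)      # the floating-point subtraction itself
    return (lo, hi)

xs = inverse_add_x((1.0, 1.0), (0.5, 0.5))
assert xs[0] <= 0.5 <= xs[1]      # the true solution x = 0.5 is retained
```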
4.2 The Boolean Domain for NaN
From now on, we will consider floating-point intervals with boundaries in . They can capture the extended numbers in only: NaNs (quiet NaNs and signaling NaNs) must be tracked separately. To this purpose, a Boolean domain , where stands for “may be NaN” and means “cannot be NaN”, can be used and coupled with the arithmetic filtering algorithms.
Let be an arithmetic constraint over floating-point numbers, and let , and be the variable domains of and , respectively. In practice, the propagation process for such a constraint reaches a fixed point when the combination of refined domains , and is the same as the one obtained in the previous iteration. At each iteration of the algorithm, we analyze the NaN domains of all the constraint variables in order to determine the next propagator action.
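A minimal sketch of how the Boolean NaN domain can be coupled with interval reasoning, for the direct filtering of an addition (the function name and domain encoding are hypothetical): with interval bounds in the extended numbers only, the result of an addition may be NaN exactly when an operand may be NaN or the operand intervals admit opposite infinities.

```python
import math

def may_be_nan_add(x, y):
    """NaN-domain component of direct filtering for z = x (+) y.
    A domain is a pair (interval, may_be_nan).  The result may be NaN iff
    some operand may be NaN, or opposite infinities can meet."""
    (xl, xh), x_nan = x
    (yl, yh), y_nan = y
    opposite_infs = (xl == -math.inf and yh == math.inf) or \
                    (yl == -math.inf and xh == math.inf)
    return x_nan or y_nan or opposite_infs

assert may_be_nan_add(((0.0, 1.0), False), ((0.0, 1.0), False)) is False
assert may_be_nan_add(((-math.inf, 0.0), False), ((0.0, math.inf), False)) is True
```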
4.3 Filtering Algorithms for Simple Arithmetic Constraints
Filtering algorithms for arithmetic constraints are the main focus of this paper. In the next sections, we will propose algorithms realizing optimal direct projections and correct inverse projections for the addition () and division () operations. The reader interested in implementing constraint propagation for all four operations can find the algorithms and results for the missing operations in Appendix 0.A.
The filtering algorithms we are about to present are capable of dealing with any set of rounding modes and are designed to distinguish between all different (special) cases in order to be as precise as possible, especially when the variable domains contain symbolic values. Much simpler projections can be designed whenever precision is not of particular concern. Indeed, the algorithms presented in this paper can be considered as the basis for finding a good trade-off between efficiency and the required precision.
Here we deal with constraints of the form with . Let , and .
Thanks to Proposition 1, any set of rounding modes can be mapped to a pair of “worst-case rounding modes” which, in addition, are never round-to-zero. Therefore, the projection algorithms use the selectors presented in Definition 7 to choose the appropriate worst-case rounding mode, and then operate as if it were the only one in effect, yielding results that are implicitly valid for the entire set .
For direct propagation, i.e., the process that infers a new interval for starting from the interval for and , we propose Algorithm 1 and functions and , as defined in Figure 2. Functions and yield new bounds for interval . In particular, function gives the new lower bound, while function provides the new upper bound of the interval. Functions and handle all rounding modes and, in order to be as precise as possible, they distinguish between several cases, depending on the values of the bounds of intervals and . These cases are infinities ( and ), zeroes ( and ), negative values () and positive values ().
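Since Algorithm 1 and Figure 2 are not reproduced here, the following is a heavily simplified stand-in conveying only the core idea: the new lower bound of the result interval is the worst-case downward rounding of the sum of the lower bounds, the new upper bound the worst-case upward rounding of the sum of the upper bounds. It assumes finite bounds and worst-case modes down/up (cf. Proposition 1), emulates directed rounding with exact rationals, and omits all the special-case analysis for infinities, zeroes, and sign that the actual functions perform.

```python
from fractions import Fraction
import math

def round_dir(r, d):
    """Round the exact rational r to a double: down if d < 0, up if d > 0."""
    f = float(r)
    if d < 0 and Fraction(f) > r:
        return math.nextafter(f, -math.inf)
    if d > 0 and Fraction(f) < r:
        return math.nextafter(f, math.inf)
    return f

def direct_add(x, y):
    """Simplified direct projection for z = x (+) y with finite bounds:
    the new lower bound uses the worst-case downward rounding, the new
    upper bound the worst-case upward rounding."""
    (xl, xh), (yl, yh) = x, y
    zl = round_dir(Fraction(xl) + Fraction(yl), -1)
    zh = round_dir(Fraction(xh) + Fraction(yh), +1)
    return (zl, zh)

z = direct_add((0.1, 0.2), (0.3, 0.4))
assert z[0] <= 0.1 + 0.3 and 0.2 + 0.4 <= z[1]   # sound under round-to-nearest
```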