On the use of the Infinity Computer architecture to set up a dynamic precision floating-point arithmetic

02/24/2020 ∙ by Pierluigi Amodio, et al. ∙ University of Bari Aldo Moro, UNIFI

We devise a variable precision floating-point arithmetic by exploiting the framework provided by the Infinity Computer. This is a computational platform implementing the Infinity Arithmetic system, a positional numeral system which can handle both infinite and infinitesimal quantities, expressed by the positive and negative finite powers of the radix grossone. The computational features offered by the Infinity Computer allow us to dynamically change the accuracy of representation and of the floating-point operations during the flow of a computation. When suitably implemented, this possibility turns out to be particularly advantageous when solving ill-conditioned problems. In fact, compared with a standard multi-precision arithmetic, here the accuracy is improved only when needed, thus limiting the impact on the overall computational effort. An illustrative example about the solution of a nonlinear equation is also presented.

1 Introduction

The Arithmetic of Infinity was introduced by Y.D. Sergeyev with the aim of devising a new coherent computational environment able to handle finite, infinite and infinitesimal quantities, and to execute arithmetical operations with them. It is based on a positional numeral system with the infinite radix ①, called grossone and representing, by definition, the number of elements of the set of natural numbers (see, for example, Sergeyev (2008, 2009) and the survey paper Sergeyev (2017)). Similar to the standard positional notation for finite real numbers, a number in this system is recorded as

C = c_{p_m}①^{p_m} … c_{p_1}①^{p_1} c_{p_0}①^{p_0} c_{p_{-1}}①^{p_{-1}} … c_{p_{-k}}①^{p_{-k}},

with the obvious meaning

C = c_{p_m}①^{p_m} + … + c_{p_1}①^{p_1} + c_{p_0}①^{p_0} + c_{p_{-1}}①^{p_{-1}} + … + c_{p_{-k}}①^{p_{-k}}. (1)

The coefficients c_{p_i}, called grossdigits, are real numbers, while the grosspowers p_i, sorted in decreasing order

p_m > … > p_1 > p_0 > p_{-1} > … > p_{-k},

may be finite, infinite or infinitesimal even though, for our purposes, only finite integer grosspowers will be considered.

Notice that, since ①^0 = 1 by definition, the set of real numbers and the related operations are naturally included in this new system. In this respect, the Arithmetic of Infinity should be perceived as a more powerful tool that improves the ability of observing and describing mathematical outcomes that the standard numeral system could not properly handle. In particular, the new system allows us to better inspect the nature of the infinite objects we are dealing with. For example, while in the standard thinking ∞ + 1 = ∞, if we are in the position to specify the kind of infinity we are observing as, say, ①, then using the new methodology such an equality is better replaced with ① + 1 > ①. According to the principle that the part is less than the whole, this novel perception of infinite quantities has proved successful in resolving a number of paradoxes involving infinities and infinitesimals, the most famous being Hilbert’s paradox of the Grand Hotel (see Sergeyev (2008, 2009)).

The Arithmetic of Infinity paradigm is rooted in three methodological postulates and its consistency has been rigorously analyzed in Lolli (2015). Its theoretical and practical implications are far-reaching, also considering that the final goal is to make the new computing system available through a dedicated processing unit. The computational device that implements the Infinity Arithmetic has been called the Infinity Computer and is patented in the EU, the USA, and Russia (see, for example, Sergeyev (2010)).

Among the many fields of research to which this new methodology has been successfully applied, we mention numerical differentiation and optimization De Cosmis and Leone (2012); Sergeyev (2011); Žilinskas (2012), the numerical solution of differential equations Sergeyev (2013); Amodio et al. (2016); Sergeyev et al. (2016); Mazzia et al. (2016); Iavernaro et al. (2019), models for percolation and biological processes Vita et al. (2012); Iudin and Sergeyev (2012), and cellular automata Iudin et al. (2015); D’Alotto (2015). (For further references and applications see the survey Sergeyev (2017).)

The aim of the present study is to devise a dynamic precision floating-point arithmetic by exploiting the computational platform provided by the Infinity Computer. In contrast with standard variable precision arithmetics, not only may the accuracy be dynamically changed during the execution of a given algorithm, but variables stored with different accuracies may also be combined through the usual algebraic operations. This strategy is explored here and addressed to the accurate solution of ill-conditioned/unstable problems Brugnano et al. (2011); Iavernaro et al. (2006).

One interesting application is the possibility of handling ill-conditioned problems or even of implementing algorithms which are labeled as unstable in standard floating-point arithmetic (first results on handling ill-conditioning using the Infinity Computer may be found in Gaudioso et al. (2018); Sergeyev et al. (2018)). One example in this direction has been illustrated in Amodio et al. (2020). It consists in the use of iterative refinement to improve the accuracy of a computed solution to an ill-conditioned linear system until a prescribed input accuracy is achieved.

The paper is organized as follows. In the next section we highlight those features of the Infinity Computer that play a key role in setting up the variable-precision arithmetic. The latter is discussed in Section 3, together with a few illustrative examples. As an application in Numerical Analysis, in Section 4 we consider the problem of finding the zero of a nonlinear function affected by ill-conditioning issues. Finally, some conclusions are drawn in Section 5.

2 Background

As is the case with standard floating-point arithmetic, the Infinity Computer handles both numbers and operations numerically (not symbolically). Consequently, it can efficiently sustain the massive amount of computation needed when solving a wide variety of real-life problems. On the other hand, a roundoff error proportional to the machine accuracy is generated during the representation of data (i.e., the grossdigits and grosspowers appearing in (1)) and the execution of the basic operations. We will give a more detailed description of how the representation of grossdigits and the floating-point operations should be carried out in the next section. Here, for the sake of simplicity, we will neglect these sources of error.

The grossnumbers that will be considered in the sequel are those that admit an expansion in terms of integer powers of ① and, thus, take the form

x = x_0①^0 + x_1①^{-1} + x_2①^{-2} + … + x_T①^{-T}, (2)

where T denotes the maximum order of infinitesimal appearing in x. For this special set, the arithmetic operations on the Infinity Computer follow the same rules defined for the polynomial ring. For example, given the two grossnumbers

x = x_0①^0 + x_1①^{-1} + x_2①^{-2},   y = y_0①^0 + y_1①^{-1}, (3)

we get

x + y = (x_0 + y_0)①^0 + (x_1 + y_1)①^{-1} + x_2①^{-2},
x · y = x_0y_0①^0 + (x_0y_1 + x_1y_0)①^{-1} + (x_1y_1 + x_2y_0)①^{-2} + x_2y_1①^{-3},

and analogously for the division x/y. Notice that, on the Infinity Computer, variables may coexist with different storage requirements. Setting aside the (negative) powers of ① that, as we will see, need not be stored in our usage, the variable x displays infinitesimal quantities up to order 2, thus requiring one extra record to store the grossdigit x_2, if compared with the variable y in (3), that only contains a first-order infinitesimal. This circumstance also influences the computational complexity associated with each single floating-point operation. As a consequence of the different amount of memory allocated for storing grossnumbers, the global computational complexity associated with a given algorithm performed on the Infinity Computer cannot be merely estimated in terms of how many flops are executed, but should also take into account how many grossdigits are involved in each operation.
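To make the polynomial-ring analogy concrete, a grossnumber of the form (2) may be emulated by the vector of its grossdigits, so that sums reduce to componentwise additions and products to discrete convolutions. The following Matlab fragment is only an illustration of this analogy (the sample values and the representation chosen here are ours, not the emulator's internal ones):

% a grossnumber x0 + x1*①^(-1) + ... + xT*①^(-T) is stored as the row
% vector of its grossdigits [x0 x1 ... xT] (illustrative representation)
x = [2.5  0.3  -1.0];                 % infinitesimals up to order 2
y = [1.2  0.7];                       % a first-order infinitesimal only

% addition: pad the shorter vector with zeros and add componentwise
L = max(numel(x), numel(y));
xs = [x, zeros(1, L - numel(x))];
ys = [y, zeros(1, L - numel(y))];
s = xs + ys;                          % grossdigits of x + y

% multiplication: polynomial (convolution) product of the grossdigits
p = conv(x, y);                       % grossdigits of x*y, orders 0,...,3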

If x is chosen as in (2), we denote by x^{(k)} its section obtained by neglecting, in the sum, all the infinitesimals of order greater than k, that is

x^{(k)} = x_0①^0 + x_1①^{-1} + … + x_k①^{-k},   0 ≤ k ≤ T. (4)

For example, choosing k = 0 and x and y as in (3), we see that x^{(0)} + y^{(0)} and x^{(0)} · y^{(0)} would resemble the floating-point addition and multiplication in standard arithmetic, respectively, while additional effort is needed if other powers of ① are successively involved. More precisely, the computational cost associated with a single operation on two grossnumbers will depend on how many infinitesimals are considered. Assuming k ≤ T and denoting by x_i and y_i the grossdigits associated with x and y, for the two sections x^{(k)} and y^{(k)} the addition

x^{(k)} + y^{(k)} = Σ_{i=0}^{k} (x_i + y_i) ①^{-i} (5)

requires k+1 additions of grossdigits, while the multiplication

(x · y)^{(k)} = Σ_{i=0}^{k} ( Σ_{j=0}^{i} x_j y_{i-j} ) ①^{-i} (6)

amounts to (k+1)(k+2)/2 multiplications and k(k+1)/2 additions/subtractions of grossdigits (the division algorithm is described in Section 3 and therefore is not discussed here). It is worth noticing that, since in both operations all the coefficients of the powers ①^{-i} may be independently calculated, there is considerable room for parallelization. We will not consider this aspect in detail in the present study.
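The operation counts just mentioned can be checked directly on the same vector representation; the small sketch below (with arbitrary sample grossdigits and our own helper, not part of the emulator) computes the index-k sections of a sum and of a product:

% grossdigits of two grossnumbers, padded to a common length (sample values)
x = [2.5  0.3  -1.0];
y = [1.2  0.7   0.0];
section = @(v, k) v(1:k+1);           % keep the grossdigits of order 0..k

k = 2;
sk = section(x, k) + section(y, k);   % (5): k+1 grossdigit additions

% (6): index-k section of the product, i.e. a truncated convolution,
% costing (k+1)(k+2)/2 grossdigit multiplications and k(k+1)/2 additions
pk = zeros(1, k+1);
for i = 0:k
    pk(i+1) = sum(x(1:i+1) .* fliplr(y(1:i+1)));
end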

3 A variable-precision representation of floating-point numbers on the Infinity Computer

Grossnumbers of the form (2) and their sections (4) form the basis of the new floating-point arithmetic, where numbers with a different accuracy may be simultaneously represented and combined. The idea is to let ①^{-1} and its powers act as machine infinitesimal quantities when related to the classical floating-point system. These infinitesimal entities, if suitably activated or deactivated, may be conveniently exploited to increase or decrease the required accuracy during the flow of a given computation. This strategy may be used to automatically detect ill-conditioning issues during the execution of a code that solves a given problem, and to change the accuracy accordingly, in order to optimize the overall computational effort under the constraint that the resulting error in the output solution should fit a given input tolerance. A formal introduction of the new dynamic precision arithmetic is discussed hereafter.

3.1 Machine numbers and their storage in the Infinity Computer

Let n and T be two given non-negative integers and set N = n(T+1). The set of machine numbers we are interested in is given by

F = { x = ± 0.d_1 d_2 ⋯ d_N · b^e }, (7)

where b denotes the base of the numeral system, the integer e is the exponent, ranging in a given finite interval, and d_1, …, d_N are the significant digits, with d_1 ≠ 0 (normalization condition). Starting from d_1, we group the digits into T+1 adjacent strings, each of length n:

x = ± 0.(d_1 ⋯ d_n)(d_{n+1} ⋯ d_{2n}) ⋯ (d_{nT+1} ⋯ d_{n(T+1)}) · b^e. (8)

The representation of the numbers as in (7), under the shape (8), suggests an interesting application of the Infinity Computer. Introducing the new symbol ❶, called dark grossone, as

❶ = b^n, (9)

and setting

β_i = 0.d_{in+1} d_{in+2} ⋯ d_{(i+1)n},   i = 0, …, T, (10)

the number in (8) may be rewritten as

x = ± ( β_0 + β_1❶^{-1} + β_2❶^{-2} + … + β_T❶^{-T} ) · b^e. (11)

Its section of index k is then given by

x^{(k)} = ± ( β_0 + β_1❶^{-1} + … + β_k❶^{-k} ) · b^e,   k = 0, …, T. (12)

We assume that a real number is represented by a floating-point number in the form (11) by truncating it, or rounding it to the nearest even, after the digit d_N. This is the maximum attainable accuracy during the data representation phase but, in general, a lower accuracy (and hence faster execution times) will be required while processing the data, which will be achieved by involving sections of suitable indices during the computations.
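As an illustration of the grouping (8)–(11), the fragment below splits the binary mantissa of a positive double precision number into T+1 consecutive groups of n bits, to be read as the grossdigits β_0, …, β_T; it is a sketch under our own conventions and is not meant to reproduce the emulator's actual storage routine:

n = 8; T = 3; b = 2;                    % four grossdigits of one byte each
x = pi;                                 % any positive real number (sample)
[f, e] = log2(x);                       % x = f * 2^e with f in [0.5, 1)
beta = zeros(1, T+1);                   % grossdigits beta_0, ..., beta_T
for i = 0:T
    chunk = floor(f * b^n);             % next n binary digits of the mantissa
    beta(i+1) = chunk / b^n;            % stored as the fraction 0.d_1...d_n
    f = f * b^n - chunk;                % drop them and continue with the tail
end
% reconstruction according to (11), with Dg = b^n playing the role of ❶
Dg = b^n;
x_trunc = sum(beta .* Dg.^(-(0:T))) * b^e;   % x truncated to n*(T+1) bits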

Echoing the symbol ①, the new symbol ❶ emphasizes the formal analogy between a machine number and a grossnumber (compare (11) with (2) and (12) with (4)). This correspondence suggests that the computational platform provided by the Infinity Computer may be conveniently exploited to host the set F defined in (7) and to execute operations on its elements using a novel methodology. This is accomplished by formally identifying the two symbols, which means that, though they refer to two different definitions, they are treated in the same way in relation to the storage and execution of the basic operations. In accord with the features outlined in Section 2, the Infinity Computer will then be able to:

  • store floating-point numbers at different accuracy levels, by involving different infinitesimal terms, according to the need;

  • easily access sections of floating-point numbers as defined in (12);

  • perform computations involving numbers stored with different accuracies.

The affinity between the meaning of the two symbols goes even beyond what has been stated above. We have already observed that the case k = 0 in (12) resembles the standard set of floating-point numbers with n significant figures. This means that, when the Infinity Computer works with numbers of the form x^{(0)}, it precisely matches the structure designed following the principles of the IEEE 754 standard. In this mode, the operational accuracy is set at its minimum value and the upper bound on the relative error due to rounding (unit roundoff) is u = ½ b^{1−n}. In other words, ❶^{-1} will be perceived as an infinitesimal entity which cannot be handled unless we let numbers in the form x^{(1)} come into play. This argument can then be iterated to involve x^{(2)}, …, x^{(T)}. Mimicking the same concept expressed by the use of ①, negative powers of ❶ act like lenses to observe and combine numbers using different accuracy levels.

Remark 1

What about the role of ❶ as an infinite-like quantity? Consider again the basic operational mode with numbers in the form x^{(0)}. If we ask the computer to count integer numbers according to the scheme

n=0
while n+1>n
   n=n+1
end

it would stop at n = ❶, yielding a further similarity with the definition of ① in the Arithmetic of Infinity. Again, by involving sections of higher index, the counting process could be safely continued.

In conclusion, the role of ❶ could be interpreted as an inherent feature of the machine architecture which, consistently with the Infinity Arithmetic methodology, could activate suitable negative powers of ❶ to get, when needed, a better perception of numbers. The examples included in the sequel further elucidate this aspect.
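The same experiment can be reproduced directly in IEEE double precision, where the counting necessarily stops at 2^53, the value playing there the role attributed above to ❶ (a plain Matlab check, not run on the emulator):

% in double precision 2^53 + 1 is not representable and rounds back to 2^53,
% so the guard n + 1 > n of the counting loop fails exactly there
n = 2^53;
disp(n + 1 > n)      % prints 0 (false): the counting stops at n = 2^53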

3.2 Floating-point operations

We have seen that, through the formal identification of ❶ with ①, it is possible to store the elements of F as if they were grossnumbers and, consequently, to take advantage of the facilities provided by the Infinity Computer in accessing their sections and performing the four basic operations on them, according to the rules described in Section 2 (see, for example, (5) and (6)). For these reasons, in the sequel, we shall use ① in place of ❶ when working on the Infinity Computer, even though, due to the finite nature of ❶, the result of a given operation may not be in the form (12), so that a normalization procedure has to be considered. Hereafter, we report a few examples in order to elucidate this aspect. In all cases, a binary base has been adopted for data representation.

Addition.

Set n = 4 and T = 2 (three grossdigits, each with four significant digits), and consider the sum of two floating-point normalized numbers.

Table 1 summarizes the procedure by a sequence of commented steps.

Table 1: Scheme of the addition of two positive floating-point numbers.

First of all, the two numbers are stored in memory by distributing their digits along the powers ❶^0, ❶^{-1} and ❶^{-2} (step (a)). Before summing the two numbers, an alignment is performed to make the two exponents equal (step (b)). Notice that shifting the digits of the second number to the right causes a redistribution of the digits along the three mantissas. Step (c) performs a polynomial-like sum of the two numbers. The contribution of each term has to be consistently redistributed (step (d)), in order to take into account possible carry bits, and the three mantissas accordingly updated (step (e)). Steps (f) and (g) conclude the computation by normalizing and rounding the result.
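A minimal sketch of the steps of Table 1 is reported below, with the grossdigits stored as integers in [0, B), where B = 2^n acts as the dark grossone. For simplicity, the exponents are taken here as powers of B rather than of the base b, the final rounding back to T+1 grossdigits is omitted, and all names are ours (the emulator's actual routine may differ):

function [d, e] = gross_add(d1, e1, d2, e2, n)
% d1, d2: rows of grossdigits (integers in [0, 2^n)), most significant first
% e1, e2: exponents, here powers of B = 2^n; we assume e1 >= e2
B = 2^n;
d2 = [zeros(1, e1 - e2), d2];              % (b) align the exponents
L = max(numel(d1), numel(d2));
d1(end+1:L) = 0;  d2(end+1:L) = 0;
d = d1 + d2;                               % (c) polynomial-like sum
for i = L:-1:2                             % (d)-(e) redistribute the carries
    c = floor(d(i)/B);
    d(i) = d(i) - c*B;
    d(i-1) = d(i-1) + c;
end
e = e1;
if d(1) >= B                               % (f)-(g) normalize the result
    d = [floor(d(1)/B), mod(d(1), B), d(2:end)];
    e = e + 1;
end
end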

Subtraction.

As usual, floating-point subtraction between two numbers sharing the same sign is performed by inverting the sign bit of the second number, converting its mantissa to 2’s complement, and then performing the addition as outlined above. It is well known that subtracting two close numbers may lead to cancellation issues. We consider an example where the accuracy may be dynamically changed in order to overcome ill-conditioning issues. We assume to work with the arithmetic resulting from setting n = 8 and T = 3 (four grossdigits, each consisting of one byte) with truncation. It turns out that, for a floating-point number representing an input real number x, its section of index 0 may be interpreted as the single precision representation of x, while the sections of index 1, 2 and 3 are its double, triple and quadruple precision approximations, respectively. Loss of accuracy, resulting from a subtraction between two numbers having the same sign, will be detected during the normalization phase, when it requires shifting the mantissa by a large number of bits.

Consider the simple problem of evaluating the function f(x_1, x_2, x_3) = x_1 + x_2 + x_3, which computes the sum of three real numbers, and assume that the user requires single precision accuracy in the result. In the examples below, we discuss three different situations.

Example 1

The three real numbers x_1, x_2 and x_3 considered here are all positive and are represented on the Infinity Computer in the form (11). Since we are adding positive numbers, no control on the accuracy is needed, and the result is yielded with a relative error of the order of the single precision roundoff unit, as expected.

Example 2

Given the three real numbers defined in the previous example, we now want to evaluate f for a combination of the data in which a subtraction between close quantities occurs, requiring an eight-bit accuracy in the result. Table 2 shows the sequence of steps performed to achieve the desired result.

Table 2: Avoiding cancellation issues when evaluating the function f for the input data in Example 2.

The computation in single precision, as in the previous example, is described in step (a): it leads to a clear cancellation phenomenon and, once this is detected, the accuracy is improved by letting the ❶^{-1} terms enter into play (step (b)). However, the relative error remains higher than the prescribed tolerance, and the accuracy needs to be improved by also considering the ❶^{-2} terms. The computation is then repeated at step (c) and the correct result is finally achieved. Notice that, in performing steps (b) and (c), one can evidently exploit the work already carried out in the previous step. The overall procedure thus requires 6 additions/subtractions of grossdigits, the same number that would be needed by directly working with a 24-bit register which, for this case, is the minimum accuracy requirement to obtain eight correct bits in the result. This means that no extra effort is introduced by proceeding through the steps. As a further remark, we stress again that a parallelization across the steps is also possible, even though we will not discuss this issue here.

Example 3

We want to evaluate f requiring an eight-bit accurate result, now choosing a different set of input data.

Table 3 shows the sequence of steps performed to achieve the desired result for this case.

Table 3: Avoiding cancellation issues when evaluating the function f for the input data in Example 3.

When working in single precision, an accuracy improvement is already needed when subtracting the first two terms and, consequently, step (a) is stopped. At step (b), their difference is evaluated in double precision which, on balance, assures an eight-bit accuracy in the result. However, a new cancellation issue emerges when the third term is subtracted from this partial result, suggesting that the two operands need to be represented more accurately. This is done in step (c), evaluating the partial difference in triple precision and representing the third term in double precision. The overall procedure requires 5 additions/subtractions of grossdigits. This example, compared with the previous one, reveals the coexistence of variables combined with different precisions.

Summarizing the three examples above, we observe how the accuracy of representation and combination of variables may be dynamically changed, in order to overcome a possible loss of significant figures in the result when evaluating a function. Of course, for this strategy to work, it is necessary that the input data be stored with high precision and that a technique to detect the loss of accuracy be available. In Section 4 we will illustrate this procedure applied to the accurate determination of zeros of functions (a further example may be found in Amodio et al. (2020)).
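The effect pursued in Examples 2 and 3 can be mimicked, in ordinary Matlab and with hypothetical data, by recomputing a cancelling sum with more significant digits; this is only a plain single/double precision illustration, not the grossdigit mechanism of the emulator:

x1 = 1.0000004;  x2 = -1.0000001;  x3 = 2.0e-7;   % arbitrary sample data
s_single = single(x1) + single(x2) + single(x3);  % about 7 decimal digits
s_double = x1 + x2 + x3;                          % recomputed with more digits
rel_gap = abs(double(s_single) - s_double)/abs(s_double)
% rel_gap is large: the cancellation x1 + x2 has destroyed most of the
% figures available in single precision, and a higher accuracy is needed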

Concerning the computational complexity, it should be noticed that Example 1 reflects the normal situation where the use of the standard precision is enough to produce a correct result, while Examples 2 and 3 highlight less frequent events.

Multiplication.

Set n = 4 and T = 2 (three grossdigits, each with four significant digits), and consider the product of two floating-point normalized numbers.

Table 4 summarizes the procedure by a sequence of commented steps.

Table 4: Scheme of the multiplication of two floating-point numbers.

After expanding the input data along the negative powers of ❶ for data storage (step (a)), the convolution product described in (6) is performed (step (b)). At step (c), the contribution of each term is redistributed, and a sum is then needed to update the mantissas (step (d)). Steps (e) and (f) conclude the computation by normalizing and rounding the result. Notice that step (e) may be carried out by applying the rules for the addition described in Table 1. Again, we stress that the terms in the convolution product, as well as in the subsequent sum, may be computed in parallel.

Division.

The division of two floating-point numbers x and y is recast as the multiplication of x by the reciprocal of y. The latter, in turn, is obtained with the aid of the Newton–Raphson method applied to find the zero of the function f(z) = 1/z − y. Hereafter, without loss of generality, we assume the two numbers to be positive. Starting from a suitable initial guess z_0, the Newton iteration then reads

z_{k+1} = z_k (2 − y z_k),   k = 0, 1, 2, …. (13)

The relative error

e_k = (1/y − z_k)/(1/y) = 1 − y z_k

satisfies

e_{k+1} = e_k^2, (14)

which means that, as is expected in the presence of simple zeros, the sequence eventually converges quadratically to 1/y, and the number of correct figures doubles at each iteration. This feature makes the division procedure extremely efficient in our context, since the required accuracy may be easily increased to an arbitrary level. In order to obtain such a good convergence rate starting from the very beginning of the sequence, the numerator and denominator are scaled by a suitable factor so that y lies in the interval [1/2, 1]. In the literature, the minimax linear polynomial approximation is often used to estimate the reciprocal of y. The resulting initial guess is

z_0 = 48/17 − (32/17) y,

which assures an initial error |e_0| ≤ 1/17. Taking into account the equality (14), the relative error at step k decreases as

|e_k| = |e_0|^{2^k} ≤ (1/17)^{2^k},

and consequently, under the scaling above, a p-bit accurate approximation is obtained by setting

k = ⌈ log_2( (p+1)/log_2 17 ) ⌉,

where ⌈·⌉ denotes the ceiling function. As an example, four iterations suffice to get an approximation with at least 65 correct binary digits. Table 5 shows the sequence generated by the scheme above applied to find, on the Infinity Computer, the reciprocal of a given binary number, under the choice n = 4 and T = 7 (eight grossdigits, each with four significant figures).

Table 5: Newton iteration to compute the reciprocal of a binary number on the Infinity Computer.
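The same scheme can be tried out in ordinary double precision with a few lines of Matlab (the denominator below is an arbitrary sample value already scaled into [1/2, 1]):

D = 0.828125;                   % denominator, already scaled into [1/2, 1]
z = 48/17 - (32/17)*D;          % minimax linear initial guess, |e0| <= 1/17
for k = 1:4                     % four iterations: |e4| <= (1/17)^16 < 2^(-53)
    z = z*(2 - D*z);            % Newton step (13); the error squares, see (14)
end
rel_err = abs(z - 1/D)/(1/D)    % at (or below) the double precision roundoff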

3.3 Implementation details

We have developed a Matlab prototype emulating the Infinity Computer environment, interfaced with a module that performs the carrying, normalization and rounding processes needed, under the identification of ❶ with ①, to ensure the proper functioning of the resulting dynamic floating-point arithmetic.

The emulator represents input real numbers using a set of binary grossdigits, whose length and number are defined by the two input parameters n and T. The latter parameter defines the maximum available accuracy for storing variables. In accord with formulae such as (5) and (6), the actual accuracy used to execute a single operation will depend on the accuracy of the two operands, but cannot exceed the one associated with T. A possible data layout is sketched below.
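The layout shown here is purely illustrative, since the internal representation actually adopted by the Matlab prototype is not documented in this paper:

n = 53; T = 4;                  % the two input parameters of the emulator
v.sign = 1;                     % +1 or -1
v.e    = 0;                     % exponent of the base b = 2
v.gd   = zeros(1, T+1);         % up to T+1 grossdigits, each n bits wide
v.k    = 0;                     % index of the section currently in use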

At the moment, the emulator implements the four basic operations following the strategies described above, plus some simple functions. The vectorization needed to speed up the execution time associated with each floating-point operation has not yet been addressed, so that all operations between grossdigits are executed sequentially.

All computations reported in the present paper, including the results presented in the next section, have been carried out on an Intel i5 quad-core computer with 16GB of memory, running Matlab R2019b.

4 A numerical illustration

As an application highlighting the potential of the dynamic precision arithmetic introduced above, we consider the problem of determining accurate approximations of the zeros of a function f(x), in the case where this problem suffers from ill-conditioning issues.

The finite arithmetic representation of the function introduces perturbation terms of a different nature: analytical errors, errors in the coefficients or parameters involved in the definition of the function, and roundoff errors introduced during its evaluation.

From a theoretical point of view, these sources of error may be accounted for by introducing a perturbation function g(x) and analyzing its effects on the zeros of the perturbed function f(x) + ε g(x), where the factor ε has the size of the unit roundoff. Under regularity assumptions on f and g, if α is a zero of f with multiplicity m, it turns out that f + ε g admits a perturbed zero α + δ, with the perturbing term δ satisfying, in first approximation,

|δ| ≃ | m! ε g(α) / f^{(m)}(α) |^{1/m}. (15)

As an example, consider the fifth-degree polynomial p(x) reported in

(16)

that admits a unique root α with multiplicity m = 5 (indeed, p(x) coincides with (x − α)^5 up to a constant factor). For this problem, formula (15) gives

|δ| = O(ε^{1/5}). (17)

Working with 64-bit IEEE arithmetic, i.e. with a roundoff unit ε = 2^{−53} ≈ 1.1·10^{−16}, we expect a breakdown of the relative error proportional to ε^{1/5}, so that, assuming the root is of moderate size, its approximation only contains about three correct decimal figures.
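The expected accuracy breakdown can be quantified with a one-line check:

eps_double = 2^(-53);           % roundoff unit of 64-bit IEEE arithmetic
delta = eps_double^(1/5)        % about 6.4e-4: roughly three correct decimal digits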

This is confirmed by the two plots in Figure 1. They display the relative error of the approximations to α generated by applying the Newton method to the problem p(x) = 0, starting from a suitable initial guess x_0:

x_{k+1} = x_k − p(x_k)/p′(x_k),   k = 0, 1, 2, …. (18)

The solid line refers to the implementation of the iteration on the Infinity Computer using n = 53 and T = 0. This choice mimics the default double precision arithmetic in Matlab, which uses a 64-bit register to store a normalized binary number, 52 bits being dedicated to the (fractional part of the) mantissa. As a matter of fact, the dashed line, coming from the implementation of the scheme in standard Matlab arithmetic, precisely overlaps with the solid line as long as the error decreases, while the two lines slightly depart from each other when they reach the saturation level predicted by (17).
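The stagnation of the dashed line is easy to reproduce in plain double precision Matlab; the polynomial used below is our own stand-in with a quintuple root and an arbitrary initial guess, not the actual data behind Figure 1:

alpha = 1.1;                          % a quintuple root (sample value)
c = poly(alpha*ones(1,5));            % rounded coefficients of (x - alpha)^5
x = 2;                                % arbitrary initial guess
for k = 1:60                          % Newton converges slowly on a multiple root
    dp = polyval(polyder(c), x);
    if dp == 0, break, end            % safeguard against a vanishing derivative
    x = x - polyval(c, x)/dp;
end
rel_err = abs(x - alpha)/abs(alpha)   % stagnates around eps^(1/5), i.e. about 1e-3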

Figure 1: Relative error of the sequence of approximations generated by the Newton method applied to the polynomial (16). Solid line: implementation on the Infinity Computer with n = 53 and T = 0. Dashed line: implementation in Matlab double precision arithmetic.

We now want to improve the accuracy of the approximation to the zero of (16) by exploiting the new computational platform. Hereafter, the 53-bit precision used above will be referred to as single precision. The dashed lines in Figure 2 show the relative error reduction when the Newton method is implemented on the Infinity Computer working with multiple fixed precisions. From top to bottom, we can see the five saturation levels corresponding to the stagnation of the error in single, double, triple, quadruple and quintuple precision, respectively. These saturation values are consistently predicted by formula (17), after replacing ε with the roundoff unit 2^{−53k} of the k-fold precision, k = 1, …, 5.

Figure 2: Relative error corresponding to the sequence of approximations generated by the Newton method applied to the polynomial (16) on the Infinity Computer. Solid line: dynamic precision implementation. Dashed line: fixed precision implementation, for different accuracies.

Now suppose we want about 53 correct binary digits in the approximation (i.e., about 16 correct decimal digits). From the discussion above, it turns out that we have to activate the quintuple precision, thus setting n = 53 and T = 4 (five grossdigits, each consisting of a 53-bit register). However, the computational effort may be significantly reduced if we increase the accuracy by involving new negative grosspowers only when they are really needed. In a dynamic usage of the accuracy, starting from the initial guess x_0, we can initially activate the single precision mode until we reach the first saturation level and, thereafter, switch to double precision until the second saturation level is reached, and so forth, until we get the desired accuracy in the approximation. Denoting by err(k) the estimated error at step k, and by prec the current precision level, initially set equal to 1, the points where an increase of the accuracy is needed may be automatically detected by employing a simple control scheme such as

if err(k) >= s*err(k-1) && prec <= T
   prec = prec + 1;
end

where s is a positive safety factor. The solid line in Figure 2 shows the corresponding reduction of the error, and we can see that the change-of-precision scheme described above works quite well for this example, since all saturation levels are correctly detected and overcome. Once the error reaches its minimum value, the iteration could be stopped by a standard criterion even though, for clarity, we have generated additional points to reveal the last saturation level, corresponding to the largest value of prec.

Now, let us compare the computational cost of the dynamic implementation versus the fixed quintuple precision one, considering that reaching the highest accuracy requires the same number of Newton iterations in both modes (see Figure 2). On the basis of the formula reported right below (6), the dynamic implementation performs far fewer grossdigit multiplications than the fixed quintuple precision implementation, which pays the cost of the highest precision at every step (for simplicity, we do not consider additions/subtractions in the computation, since their contribution would not alter the final result). It follows that the former mode would reduce the execution time by a factor of at least eight with respect to the latter. Actually, it does much better: the dynamic usage of variables and operations, understood as the ability of handling variables with different accuracies and executing operations on them, makes the resulting arithmetic definitely much more efficient than what emerged from the comparison above.

Table 6: The Horner method for evaluating the polynomial (16) at the single precision Newton iterate x_k.

In carrying out the computation above, for the dynamic precision mode we have assumed that all floating-point operations were executed with the currently selected precision. For example, under this assumption, the computational effort per step of the two modes would become equivalent from the step at which the dynamic mode activates the quintuple precision to overcome the last threshold level in Figure 2.

There is, however, one fundamental aspect that we have not yet considered. In fact, to overcome the ill-conditioning of the problem, the higher precision is only needed during the evaluation of p(x_k) and p′(x_k) in (18), while the single 53-bit precision is enough to handle the sequence of iterates x_k. In other words, to minimize the overall computational effort, we may improve the accuracy only in the part of the code that implements the Horner rule to evaluate the polynomial and its derivative.
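For reference, the Horner scheme evaluating the polynomial and its derivative at the same time reads as follows in plain Matlab; the extended precision bookkeeping performed by the emulator on the Infinity Computer is, of course, not reproduced here:

function [p, dp] = horner_eval(c, x)
% c = [c_n, ..., c_1, c_0]: polynomial coefficients, highest degree first
p  = c(1);
dp = 0;
for i = 2:numel(c)
    dp = dp*x + p;                    % derivative updated before the value
    p  = p*x + c(i);
end
end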

Interestingly, we do not have to instruct the Infinity Computer to switch between single and quintuple precision: everything is done automatically and naturally and, more importantly, even during the evaluation of p(x_k) and p′(x_k), the transition from single to quintuple precision is gradual, in that the intermediate precisions are actually involved only when really needed, which makes the whole machinery much more efficient.

To better elucidate this aspect, we illustrate the sequence produced by the Horner rule to evaluate p(x_k) at a step where the quintuple precision is activated. The first column in Table 6 reports the five steps of the Horner method applied to evaluate the polynomial in (16) at the floating-point single precision number x_k (its value is reported in the caption of the table). The Horner variable is initialized with the leading coefficient of the polynomial, but is allowed to store five grossdigits, each 53 bits long, so as to host floating-point numbers up to quintuple precision. From the table we see that, as the iteration scheme proceeds, new negative grosspowers appear in the values taken by this variable. More precisely, at each step the variable gains one further grossdigit, so that its precision grows from single up to quintuple along the five steps.

The increase of the precision by one unit at each step evidently arises from the product of the current value by x_k, since x_k remains a single-precision variable and no rounding occurs. Let us examine more closely what happens at the last step. The product generates a quintuple-precision number whose expansion along negative grosspowers matches, in its leading grossdigits, the opposite of the constant coefficient of the polynomial. Consequently, the last operation only leaves significant digits in the coefficient of ❶^{−4} so that, after normalization, the variable stores again a single-precision number that can be consistently combined inside formula (18).

In conclusion, the Horner procedure, though being enabled to operate in quintuple precision, actually involves lower precision numbers, except at the very last step. The five steps reported in Table 6 therefore require far fewer multiplications of grossdigits than the fixed quintuple-precision mode, with a clear saving of time. Comparing the execution times in Matlab, we found out that the dynamic-precision implementation is somewhat slower than the single-precision implementation (which, however, stagnates at the first saturation level) and considerably faster than the quintuple precision mode, thus confirming the expected efficiency.

5 Conclusions

We have proposed a variable precision floating-point arithmetic able to simultaneously store numbers and execute operations with different accuracies. This feature allows one to dynamically change the accuracy during the execution of a code, in order to prevent inherent ill-conditioning issues associated with a given problem. In this context, the Infinity Computer has been recognized as a natural computational environment that can easily host such an arithmetic. The assumption that makes this paradigm work is the identification of the two symbols ① and ❶. The latter, defined as ❶ = b^n in (9), is evidently a finite quantity for our numeral system but, in many respects, its reciprocal behaves as an infinitesimal-like entity in the numeral system induced by a floating-point arithmetic operating with n significant figures. In the same spirit of the Infinity Computer, it turns out that negative powers of ❶ may be used as “lenses” to increase and decrease the accuracy when needed. An emulator of this dynamic precision floating-point arithmetic has been developed in Matlab, and an application to the accurate solution of (possibly ill-conditioned) scalar nonlinear equations has been discussed.

Acknowledgements.
This work was funded by the INdAM-GNCS 2018 Research Project “Numerical methods in optimization and ODEs” (the authors are members of the INdAM Research group GNCS).

References

  • P. Amodio, L. Brugnano, F. Iavernaro, and F. Mazzia (2020) A dynamic precision floating-point arithmetic based on the Infinity Computer framework. Lecture Notes in Comput. Sci. 11974, pp. 289–297. External Links: Document Cited by: §1, §3.2.
  • P. Amodio, F. Iavernaro, F. Mazzia, M. S. Mukhametzhanov, and Ya. D. Sergeyev (2016) A generalized Taylor method of order three for the solution of initial value problems in standard and infinity floating-point arithmetic. Math. Comput. Simulation 141, pp. 24–39. Cited by: §1.
  • L. Brugnano, F. Mazzia, and D. Trigiante (2011) Fifty years of stiffness. Recent Advances in Computational and Applied Mathematics, pp. 1–21. External Links: Document Cited by: §1.
  • L. D’Alotto (2015) A classification of one-dimensional cellular automata using infinite computations. Appl. Math. Comput. 255, pp. 15–24. Cited by: §1.
  • S. De Cosmis and R. D. Leone (2012) The use of grossone in mathematical programming and operations research. Appl. Math. Comput. 218 (16), pp. 8029–8038. Cited by: §1.
  • M. Gaudioso, G. Giallombardo, and M. S. Mukhametzhanov (2018) Numerical infinitesimals in a variable metric method for convex nonsmooth optimization. Appl. Math. Comput. 318, pp. 312–320. Cited by: footnote 2.
  • F. Iavernaro, F. Mazzia, M. S. Mukhametzhanov, and Y. D. Sergeyev (2019) Conjugate-symplecticity properties of Euler–Maclaurin methods and their implementation on the Infinity Computer. Appl. Numer. Math.. External Links: Document Cited by: §1.
  • F. Iavernaro, F. Mazzia, and D. Trigiante (2006) Stability and conditioning in numerical analysis. Journal of Numerical Analysis, Industrial and Applied Mathematics 1 (1), pp. 91–112. Cited by: §1.
  • D.I. Iudin, Ya.D. Sergeyev, and M. Hayakawa (2015) Infinity computations in cellular automaton forest-fire model. Commun. Nonlinear Sci. Numer. Simul. 20 (3), pp. 861–870. Cited by: §1.
  • D.I. Iudin and Ya.D. Sergeyev (2012) Interpretation of percolation in terms of infinity computations. Appl. Math. Comput. 218 (16), pp. 8099–8111. Cited by: §1.
  • G. Lolli (2015) Metamathematical investigations on the theory of grossone. Appl. Math. Comput. 255, pp. 3–14. Cited by: §1.
  • F. Mazzia, Ya.D. Sergeyev, F. Iavernaro, P. Amodio, and M.S. Mukhametzhanov (2016) Numerical methods for solving ODEs on the Infinity Computer. In 2nd International Conference on Numerical Computations: Theory and Algorithms, NUMTA 2016, Vol. 1776, pp. 090033. Cited by: §1.
  • Y.D. Sergeyev (2017) Numerical infinities and infinitesimals: Methodology, applications, and repercussions on two Hilbert problems. EMS Surveys in Mathematical Sciences 4 (2), pp. 219–320. Cited by: §1, footnote 1.
  • Ya. D. Sergeyev, M. S. Mukhametzhanov, F. Mazzia, F. Iavernaro, and P. Amodio (2016) Numerical methods for solving initial value problems on the Infinity Computer. International Journal of Unconventional Computing 12 (1), pp. 3–23. Cited by: §1.
  • Ya. D. Sergeyev (2013) Solving ordinary differential equations by working with infinitesimals numerically on the Infinity Computer. Appl. Math. Comput. 219 (22), pp. 10668–10681. Cited by: §1.
  • Ya.D. Sergeyev, D.E. Kvasov, and M. S. Mukhametzhanov (2018) On strong homogeneity of a class of global optimization algorithms working with infinite and infinitesimal scales. Commun. Nonlinear Sci. Numer. Simul. 59, pp. 319–330. Cited by: footnote 2.
  • Ya.D. Sergeyev (2008) A new applied approach for executing computations with infinite and infinitesimal quantities. Informatica 19 (4), pp. 567–596. Cited by: §1, §1.
  • Ya.D. Sergeyev (2011) Higher order numerical differentiation on the Infinity Computer. Optimization Letters 5 (4), pp. 575–585. Cited by: §1.
  • Y. D. Sergeyev (2009) Numerical computations and mathematical modelling with infinite and infinitesimal numbers. J. Appl. Math. Comput. 29 (1-2), pp. 177–195. Cited by: §1, §1.
  • M.C. Vita, S. De Bartolo, C. Fallico, and M. Veltri (2012) Usage of infinitesimals in the Menger’s Sponge model of porosity. Appl. Math. Comput. 218 (16), pp. 8187–8196. Cited by: §1.
  • Ya.D. Sergeyev (2010) Computer system for storing infinite, infinitesimal, and finite quantities and executing arithmetical operations with them. Note: USA patent 7,860,914 Cited by: §1.
  • A. Žilinskas (2012) On strong homogeneity of two global optimization algorithms based on statistical models of multimodal objective functions. Applied Mathematics and Computation 218 (16), pp. 8131–8136. Cited by: §1.