Iterative Methods at Lower Precision

10/07/2022
by Yizhou Chen, et al.

Since numbers in a computer are represented with a fixed number of bits, loss of accuracy during calculation is unavoidable. At high precision, where more bits (e.g., 64) are allocated to each number, round-off errors are typically small. On the other hand, calculating at lower precision, such as half precision (16 bits), has the advantage of being much faster. This research focuses on experimenting with arithmetic at different precision levels for large-scale inverse problems, which are represented by linear systems with ill-conditioned matrices. We modified the Conjugate Gradient Method for Least Squares (CGLS) and the Chebyshev Semi-Iterative Method (CS) with Tikhonov regularization to perform arithmetic at lower precision using the MATLAB chop function, ran experiments on applications from image processing, and compared the performance of the two methods at different precision levels. We concluded that CGLS is the more stable algorithm but overflows easily because of its inner-product computations, whereas CS is less likely to overflow but has more erratic convergence behavior. When the noise level is high, CS outperforms CGLS by being able to run more iterations before overflow occurs; when the noise level is close to zero, CS appears to be more susceptible to the accumulation of round-off errors.
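As a rough illustration of the kind of experiment described above (and not the authors' code), the sketch below runs CGLS on the Tikhonov-regularized least-squares problem min_x ||Ax - b||^2 + lam^2 ||x||^2 while rounding every intermediate quantity to IEEE half precision. The rounding helper lp() is a crude stand-in for the MATLAB chop function used in the paper; the function name cgls_tikhonov_lowprec, the regularization parameter lam, the iteration count, and the small random test problem are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: CGLS with Tikhonov regularization, with every intermediate
# rounded to half precision to mimic low-precision arithmetic. lp() is a crude
# stand-in for the MATLAB chop function; all problem parameters are placeholders.
import numpy as np

def lp(v):
    """Round to half precision, then continue in double (simulated fp16 storage)."""
    return np.asarray(v, dtype=np.float16).astype(np.float64)

def cgls_tikhonov_lowprec(A, b, lam, n_iter=50):
    A = lp(A); b = lp(b)
    m, n = A.shape
    x = np.zeros(n)
    r = b.copy()                    # residual b - A x  (x = 0 initially)
    s = lp(A.T @ r)                 # gradient-type vector A^T r - lam^2 x
    p = s.copy()
    gamma = lp(s @ s)
    for _ in range(n_iter):
        q = lp(A @ p)
        delta = lp(q @ q + lam**2 * (p @ p))   # inner products: overflow-prone in fp16
        if delta == 0 or not np.isfinite(delta):
            break                              # stop if breakdown/overflow occurs
        alpha = lp(gamma / delta)
        x = lp(x + alpha * p)
        r = lp(r - alpha * q)
        s = lp(A.T @ r - lam**2 * x)
        gamma_new = lp(s @ s)
        beta = lp(gamma_new / gamma)
        p = lp(s + beta * p)
        gamma = gamma_new
    return x

# Tiny illustrative problem with noisy data (placeholder, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20)) * 0.1
x_true = rng.standard_normal(20)
b = A @ x_true + 1e-2 * rng.standard_normal(40)
x_rec = cgls_tikhonov_lowprec(A, b, lam=1e-2, n_iter=30)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

In this sketch the inner products q @ q and s @ s are the quantities most likely to exceed the half-precision range, which is consistent with the abstract's remark that CGLS overflows easily because of its inner-product computations.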
