Accelerated Polynomial Evaluation and Differentiation at Power Series in Multiple Double Precision

by Jan Verschelde, et al.

The problem is to evaluate a polynomial in several variables, and its gradient, at a power series truncated at some finite degree, using multiple double precision arithmetic. To compensate for the cost overhead of multiple double precision and power series arithmetic, data-parallel algorithms for general-purpose graphics processing units are presented. The reverse mode of algorithmic differentiation is organized into a massively parallel computation of many convolutions and additions of truncated power series. Experimental results demonstrate that teraflop performance is obtained in deca double precision with power series truncated at degree 152. The algorithms scale well for increasing precision and increasing degrees.
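The paper's GPU kernels are not reproduced here, but the two building blocks the abstract names, the convolution of truncated power series and the reverse mode of algorithmic differentiation, can be sketched in plain Python. This is an illustrative sketch, not the paper's implementation: the names `convolve` and `product_and_gradient` are invented for this example, plain double precision stands in for multiple double arithmetic, and the monomial x1*x2*...*xn stands in for a general polynomial.

```python
def convolve(a, b):
    """Product of two power series truncated at the same degree:
    each coefficient of the result is a finite convolution sum."""
    d = len(a)
    c = [0.0] * d
    for k in range(d):
        for i in range(k + 1):
            c[k] += a[i] * b[k - i]
    return c

def product_and_gradient(series):
    """Evaluate the product x1*x2*...*xn (n >= 2) and all n partial
    derivatives at truncated power series, in the style of the reverse
    mode: forward prefix products, then backward suffix products.
    Every multiplication of numbers becomes one convolution."""
    n = len(series)
    # forward sweep: fwd[i] holds x1*...*x_{i+1}
    fwd = [series[0]]
    for i in range(1, n):
        fwd.append(convolve(fwd[-1], series[i]))
    value = fwd[-1]
    # backward sweep: combine prefix and suffix products into gradients
    grad = [None] * n
    back = series[-1]
    grad[-1] = fwd[-2]          # d/dx_n = x1*...*x_{n-1}
    for i in range(n - 2, 0, -1):
        grad[i] = convolve(fwd[i - 1], back)
        back = convolve(back, series[i])
    grad[0] = back              # d/dx_1 = x2*...*x_n
    return value, grad
```

Series are lists of coefficients, so `[1, 1, 0]` represents 1 + t truncated at degree 2. In the massively parallel setting described in the abstract, each of these convolutions becomes an independent kernel launch or thread-block task, which is what exposes the parallelism.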


