
Accelerated Polynomial Evaluation and Differentiation at Power Series in Multiple Double Precision

01/22/2021
by Jan Verschelde, et al.

The problem is to evaluate a polynomial in several variables and its gradient at a power series truncated to some finite degree with multiple double precision arithmetic. To compensate for the cost overhead of multiple double precision and power series arithmetic, data parallel algorithms for general purpose graphics processing units are presented. The reverse mode of algorithmic differentiation is organized into a massively parallel computation of many convolutions and additions of truncated power series. Experimental results demonstrate that teraflop performance is obtained in deca double precision with power series truncated at degree 152. The algorithms scale well for increasing precision and increasing degrees.
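To make the organization of the computation concrete, below is a minimal sketch in plain C++ with hardware double coefficients, not the multiple double arithmetic or CUDA kernels of the paper. It illustrates the two building blocks named above: a convolution of truncated power series, which replaces every coefficient multiplication, and the reverse mode of algorithmic differentiation applied to a product of variables, which delivers the value and all partial derivatives from forward and backward cumulative products. The names Series, convolute, and speelpenning are illustrative assumptions, not identifiers from the paper's code.

    // Minimal sketch: reverse mode on a product of variables whose values are
    // truncated power series.  Plain doubles stand in for multiple double
    // precision; all series are assumed to have the same length d+1 and n >= 2.
    #include <cstddef>
    #include <vector>

    using Series = std::vector<double>;  // coefficients c[0..d] of a truncated series

    // Product of two series, truncated at the same degree as the inputs.
    Series convolute(const Series& a, const Series& b) {
        const std::size_t d = a.size();
        Series c(d, 0.0);
        for (std::size_t i = 0; i < d; ++i)
            for (std::size_t j = 0; i + j < d; ++j)
                c[i + j] += a[i] * b[j];
        return c;
    }

    // Reverse mode on the product x[0]*x[1]*...*x[n-1]:
    // out[0] holds the value, out[1..n] the partial derivatives.
    std::vector<Series> speelpenning(const std::vector<Series>& x) {
        const std::size_t n = x.size();
        std::vector<Series> fwd(n), bck(n), out(n + 1);
        fwd[0] = x[0];
        for (std::size_t k = 1; k < n; ++k)        // forward cumulative products
            fwd[k] = convolute(fwd[k - 1], x[k]);
        bck[n - 1] = x[n - 1];
        for (std::size_t k = n - 1; k-- > 0; )     // backward cumulative products
            bck[k] = convolute(bck[k + 1], x[k]);
        out[0] = fwd[n - 1];                       // value of the product
        out[1] = bck[1];                           // derivative with respect to x[0]
        out[n] = fwd[n - 2];                       // derivative with respect to x[n-1]
        for (std::size_t k = 1; k + 1 < n; ++k)    // cross products for the middle variables
            out[k + 1] = convolute(fwd[k - 1], bck[k + 1]);
        return out;
    }

In the paper's setting, the many independent convolutions and additions that arise from this scheme are the units of data parallel work on the GPU; the sketch above only shows the arithmetic pattern, serially and for a single monomial.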


Related research:

01/30/2023
GPU Accelerated Newton for Taylor Series Solutions of Polynomial Homotopies in Multiple Double Precision
A polynomial homotopy is a family of polynomial systems, typically in on...

10/15/2021
Least Squares on GPUs in Multiple Double Precision
This paper describes the application of the code generated by the CAMPAR...

12/11/2020
Parallel Software to Offset the Cost of Higher Precision
Hardware double precision is often insufficient to solve large scientifi...

11/22/2020
Fresnel Integral Computation Techniques
This work is an extension of previous work by Alazah et al. [M. Alazah, ...

06/10/2017
Proposal for a High Precision Tensor Processing Unit
This whitepaper proposes the design and adoption of a new generation of ...

05/30/2017
Fast Computation of the Roots of Polynomials Over the Ring of Power Series
We give an algorithm for computing all roots of polynomials over a univa...

04/11/2022
Multiplier with Reduced Activities and Minimized Interconnect for Inner Product Arrays
We present a pipelined multiplier with reduced activities and minimized ...