
Proposal for a High Precision Tensor Processing Unit

by Eric B. Olsen, et al.

This whitepaper proposes the design and adoption of a new generation of Tensor Processing Unit (TPU) that matches the performance of Google's TPU while operating on wide-precision data. The new generation TPU is made possible by arithmetic circuits that compute using a new general-purpose, fractional arithmetic based on the residue number system.
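To make the residue-number-system idea concrete, here is a minimal sketch of RNS integer arithmetic. The moduli, function names, and dynamic range below are illustrative assumptions for this sketch, not the paper's actual circuit design; the key property shown is that addition and multiplication act independently on each residue channel, with no carries between channels, and the result is recovered via the Chinese Remainder Theorem.

```python
# Minimal sketch of residue number system (RNS) arithmetic.
# MODULI are illustrative, pairwise-coprime choices; a real design
# would pick moduli to suit the hardware's word size and range.
from math import prod

MODULI = (13, 11, 9, 7)     # pairwise coprime
M = prod(MODULI)            # dynamic range: results valid modulo 9009

def to_rns(x):
    """Encode an integer as its residue in each channel."""
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    """Channel-wise addition: no carry propagates between channels."""
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def rns_mul(a, b):
    """Channel-wise multiplication, likewise carry-free across channels."""
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    """Reconstruct the integer via the Chinese Remainder Theorem."""
    total = 0
    for x, m in zip(r, MODULI):
        Mi = M // m
        total += x * Mi * pow(Mi, -1, m)   # pow(.., -1, m): modular inverse
    return total % M

a, b = 123, 45
assert from_rns(rns_add(to_rns(a), to_rns(b))) == (a + b) % M
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % M
```

Because each channel is a small independent modular unit, the multiplies that dominate tensor workloads can run in parallel narrow circuits rather than one wide carry chain, which is the property the proposal builds on.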


Related research

MCTensor: A High-Precision Deep Learning Library with Multi-Component Floating-Point

In this paper, we introduce MCTensor, a library based on PyTorch for pro...

Introduction of the Residue Number Arithmetic Logic Unit With Brief Computational Complexity Analysis

Digital System Research has pioneered the mathematics and design for a n...

lrsarith: a small fixed/hybrid arithmetic C library

We describe lrsarith which is a small fixed precision and hybrid arithme...

New satellites of figure-eight orbit computed with high precision

In this paper we use a Modified Newton's method based on the Continuous ...

Accelerated Polynomial Evaluation and Differentiation at Power Series in Multiple Double Precision

The problem is to evaluate a polynomial in several variables and its gra...

Iterative Krylov Methods for Acoustic Problems on Graphics Processing Unit

This paper deals with linear algebra operations on Graphics Processing U...