
MCTensor: A High-Precision Deep Learning Library with Multi-Component Floating-Point

07/18/2022
by   Tao Yu, et al.

In this paper, we introduce MCTensor, a PyTorch-based library providing general-purpose, high-precision arithmetic for deep learning training. MCTensor is used in the same way as a PyTorch Tensor: we implement multiple basic and matrix-level computation operators as well as NN modules for MCTensor with interfaces identical to PyTorch's. Our algorithms achieve high-precision computation while also benefiting from PyTorch's heavily optimized floating-point arithmetic. We evaluate MCTensor arithmetic against native PyTorch arithmetic on a series of tasks, where models using MCTensor with float16 match or outperform PyTorch models with float32 or float64 precision.
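Multi-component floating-point formats of the kind the abstract describes represent one high-precision value as an unevaluated sum of several ordinary machine floats. They are built from error-free transformations such as Knuth's Two-Sum, which recovers the exact rounding error of an addition. The sketch below illustrates the idea with plain Python floats; it is a generic illustration of the underlying technique, not MCTensor's actual API.

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    """Knuth's Two-Sum: return (s, err) such that a + b == s + err exactly,
    where s is the rounded float sum and err is the exact rounding error."""
    s = a + b
    b_virtual = s - a            # the portion of b absorbed into s
    a_virtual = s - b_virtual    # the portion of a absorbed into s
    err = (a - a_virtual) + (b - b_virtual)  # exact rounding error
    return s, err

# float64 cannot represent 1.0 + 1e-17 in a single value, but the
# two-component pair (s, err) preserves the sum exactly.
s, err = two_sum(1.0, 1e-17)
print(s, err)  # s == 1.0, err == 1e-17
```

Chaining such transformations over tensors of components is what lets a library like MCTensor emulate higher precision using only fast native floating-point operations.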

