
QPyTorch: A Low-Precision Arithmetic Simulation Framework

10/09/2019
by Tianyi Zhang, et al., Cornell University

Low-precision training reduces computational cost and produces efficient models. Recent research on new low-precision training algorithms often relies on simulation to empirically evaluate the statistical effects of quantization while avoiding the substantial overhead of building specific hardware. To support this empirical research, we introduce QPyTorch, a low-precision arithmetic simulation framework. Built natively in PyTorch, QPyTorch provides a convenient interface that minimizes the effort needed to reliably convert existing code to study low-precision training. QPyTorch is general and supports a variety of combinations of precisions, number formats, and rounding options. Additionally, it leverages an efficient fused-kernel approach to reduce simulator overhead, which enables simulation of large-scale, realistic problems. QPyTorch is publicly available at https://github.com/Tiiiger/QPyTorch.
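As a concrete illustration of the interface described above, the sketch below quantizes a full-precision tensor under different number formats and rounding modes. The names used here (float_quantize, fixed_point_quantize, FloatingPoint, Quantizer) follow the README in the QPyTorch repository, but exact signatures may differ across versions, so treat this as a sketch rather than a definitive usage guide.

```python
import torch
from qtorch import FloatingPoint
from qtorch.quant import Quantizer, fixed_point_quantize, float_quantize

x = torch.rand(4, 4)

# Simulate an 8-bit fixed-point format (8-bit word length, 4 fractional
# bits) with stochastic rounding.
x_fixed = fixed_point_quantize(x, wl=8, fl=4, rounding="stochastic")

# Simulate a low-precision floating-point format (5 exponent bits,
# 2 mantissa bits) with round-to-nearest.
x_float = float_quantize(x, exp=5, man=2, rounding="nearest")

# A Quantizer module can be dropped into an existing PyTorch model to
# quantize activations on the forward pass; gradients can be quantized
# on the backward pass via backward_number/backward_rounding.
bit_8 = FloatingPoint(exp=5, man=2)
quant = Quantizer(forward_number=bit_8, forward_rounding="nearest")
y = quant(x)
```

Because these utilities operate on ordinary PyTorch tensors, an existing training script can study a new precision or rounding scheme by inserting quantization calls at the desired points, without any hardware-specific changes.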

