FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training

12/24/2020
by Yonggan Fu, et al.

Recent breakthroughs in deep neural networks (DNNs) have fueled a tremendous demand for intelligent edge devices featuring on-site learning, while the practical realization of such systems remains a challenge due to the limited resources available at the edge and the massive training costs required for state-of-the-art (SOTA) DNNs. As reducing precision is one of the most effective knobs for boosting training time/energy efficiency, there has been a growing interest in low-precision DNN training. In this paper, we explore an orthogonal direction: how to fractionally squeeze out more training cost savings from the most redundant bit level, progressively along the training trajectory and dynamically per input. Specifically, we propose FracTrain, which integrates (i) progressive fractional quantization, which gradually increases the precision of activations, weights, and gradients and does not reach the precision of SOTA static quantized DNN training until the final training stage, and (ii) dynamic fractional quantization, which assigns precisions to both the activations and gradients of each layer in an input-adaptive manner, for only "fractionally" updating layer parameters. Extensive simulations and ablation studies (six models, four datasets, and three training settings including standard, adaptation, and fine-tuning) validate the effectiveness of FracTrain in reducing the computational cost and hardware-quantified energy/latency of DNN training while achieving a comparable or better (-0.12% ~ +1.87%) accuracy. For example, when training ResNet-74 on CIFAR-10, FracTrain achieves 77.6% and 53.5% computational cost and training latency savings, respectively, compared with the best SOTA baseline, while achieving a comparable (-0.07%) accuracy. Our codes are available at: https://github.com/RICE-EIC/FracTrain.
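As a rough illustration of the progressive fractional quantization idea described above, the sketch below ramps the training bit-width up in stages so that full (static-quantization-level) precision is only used near the end of training. The stage fractions, bit-widths, and the set_quantization_bits call are illustrative assumptions, not FracTrain's actual implementation, which switches precision stages adaptively during training rather than on a fixed epoch schedule.

# A minimal sketch of a progressive precision schedule in the spirit of
# FracTrain's progressive fractional quantization (PFQ). The stage
# fractions and bit-widths below are illustrative assumptions, not the
# schedule used in the paper.

def precision_schedule(epoch, total_epochs,
                       stages=((0.25, 4), (0.5, 6), (0.75, 8), (1.0, 8))):
    """Return the bit-width to use at `epoch`.

    `stages` maps a fraction of the training budget to a bit-width; only
    the final stage reaches the precision of static low-precision training,
    so earlier epochs are cheaper.
    """
    progress = epoch / total_epochs
    for fraction, bits in stages:
        if progress <= fraction:
            return bits
    return stages[-1][1]

# Hypothetical usage inside a quantized training loop
# (`set_quantization_bits` is assumed here, not part of any specific library):
#
# for epoch in range(total_epochs):
#     bits = precision_schedule(epoch, total_epochs)
#     model.set_quantization_bits(activations=bits, weights=bits, gradients=bits)
#     train_one_epoch(model, loader, optimizer)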


research
01/25/2021

CPT: Efficient Deep Neural Network Training via Cyclic Precision

Low-precision deep neural network (DNN) training has gained tremendous a...
research
10/14/2022

Post-Training Quantization for Energy Efficient Realization of Deep Neural Networks

The biggest challenge for the deployment of Deep Neural Networks (DNNs) ...
research
01/03/2020

Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference

While increasingly deep networks are still in general desired for achiev...
research
12/19/2021

Logarithmic Unbiased Quantization: Practical 4-bit Training in Deep Learning

Quantization of the weights and activations is one of the main methods t...
research
08/10/2019

Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations

This paper tackles the problem of training a deep convolutional neural n...
research
10/24/2020

ShiftAddNet: A Hardware-Inspired Deep Network

Multiplication (e.g., convolution) is arguably a cornerstone of modern d...
research
10/09/2019

QPyTorch: A Low-Precision Arithmetic Simulation Framework

Low-precision training reduces computational cost and produces efficient...
