ShiftAddNet: A Hardware-Inspired Deep Network

10/24/2020
by Haoran You, et al.

Multiplication (e.g., convolution) is arguably a cornerstone of modern deep neural networks (DNNs). However, intensive multiplications incur expensive resource costs that challenge DNNs' deployment on resource-constrained edge devices, driving several attempts at multiplication-less deep networks. This paper presents ShiftAddNet, whose main inspiration is drawn from a common practice in energy-efficient hardware implementation: multiplication can instead be performed with additions and logical bit-shifts. We leverage this idea to explicitly parameterize deep networks accordingly, yielding a new type of deep network that involves only bit-shift and additive weight layers. This hardware-inspired ShiftAddNet immediately leads to both energy-efficient inference and training, without compromising expressive capacity compared to standard DNNs. The two complementary operation types (bit-shift and add) additionally enable finer-grained control of the model's learning capacity, leading to a more flexible trade-off between accuracy and (training) efficiency, as well as improved robustness to quantization and pruning. We conduct extensive experiments and ablation studies, all backed up by our FPGA-based ShiftAddNet implementation and energy measurements. Compared to existing DNNs or other multiplication-less models, ShiftAddNet aggressively reduces the hardware-quantified energy cost of DNN training and inference by over 80%, while offering comparable or better accuracies. Code and pre-trained models are available at https://github.com/RICE-EIC/ShiftAddNet.
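To make the idea of "bit-shift plus add" layers concrete, below is a minimal NumPy sketch (not the authors' PyTorch/CUDA/FPGA implementation) illustrating the two operation types: a shift layer whose weights are rounded to signed powers of two, so multiplying by them reduces to a bit-shift in fixed-point hardware, and an AdderNet-style additive layer that replaces the dot product with a negative L1 distance. The function names (quantize_to_shift, shift_layer, add_layer) are hypothetical and chosen for illustration only; powers of two are emulated here in floating point.

import numpy as np

def quantize_to_shift(w):
    # Round each weight to the nearest signed power of two, so that
    # multiplying by it amounts to a bit-shift plus a sign flip.
    # (Illustrative emulation in floating point; names are hypothetical.)
    sign = np.sign(w)
    mag = np.abs(w)
    exponent = np.round(np.log2(np.where(mag > 0, mag, 1.0)))
    return np.where(mag > 0, sign * 2.0 ** exponent, 0.0)

def shift_layer(x, w):
    # Fully connected "shift" layer: y = x @ P, where every entry of P
    # is a signed power of two.
    return x @ quantize_to_shift(w)

def add_layer(x, w):
    # Additive layer: each output is the negative L1 distance between the
    # input and a weight column, so only additions/subtractions are needed.
    # x: (batch, in_features), w: (in_features, out_features)
    return -np.abs(x[:, :, None] - w[None, :, :]).sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 8))
    w_shift = rng.standard_normal((8, 16))
    w_add = rng.standard_normal((16, 10))
    hidden = shift_layer(x, w_shift)   # coarse-grained, shift-based features
    out = add_layer(hidden, w_add)     # finer-grained, addition-based features
    print(out.shape)                   # (4, 10)

In this sketch the shift layer provides coarse, power-of-two scaling while the add layer refines the representation with multiplication-free arithmetic, mirroring the paper's pairing of the two complementary operation types.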

Related research

09/20/2021 · GhostShiftAddNet: More Features from Energy-Efficient Operations
05/17/2022 · ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks
10/24/2022 · NASA: Neural Architecture Search and Acceleration for Hardware Inspired Hybrid Networks
10/29/2019 · E2-Train: Energy-Efficient Deep Network Training with Data-, Model-, and Algorithm-Level Saving
04/05/2019 · FLightNNs: Lightweight Quantized Deep Neural Networks for Fast and Accurate Inference
08/20/2022 · DenseShift: Towards Accurate and Transferable Low-Bit Shift Network
12/24/2020 · FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training
