GhostShiftAddNet: More Features from Energy-Efficient Operations

09/20/2021
by Jia Bi, et al.

Deep convolutional neural networks (CNNs) are computationally and memory intensive. Their heavy reliance on multiplication makes it difficult to deploy inference efficiently on resource-constrained edge devices. This paper proposes GhostShiftAddNet, motivated by the goal of a hardware-efficient deep network: a multiplication-free CNN with fewer redundant features. We introduce a new bottleneck block, GhostSA, that converts all multiplications in the block into cheap operations. The bottleneck uses an appropriate number of bit-shift filters to process intrinsic feature maps, then applies a series of transformations consisting of bit-wise shifts and additions to generate further feature maps that capture the information underlying the intrinsic features. The numbers of bit-shift and addition operations can be scheduled to suit different hardware platforms. We conduct extensive experiments and ablation studies on desktop and embedded (Jetson Nano) devices, with implementation and measurements on real hardware. We demonstrate that the proposed GhostSA block can replace the bottleneck blocks in the backbones of state-of-the-art network architectures and improves performance on image classification benchmarks. Furthermore, GhostShiftAddNet achieves higher classification accuracy than GhostNet with fewer FLOPs and parameters (reduced by up to 3x). Compared with GhostNet, inference latency on the Jetson Nano improves by 1.3x on the GPU and 2x on the CPU.
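
The central arithmetic trick is to restrict each weight to a signed power of two, so that a multiply-accumulate becomes a bit shift followed by a signed addition. The sketch below is a minimal NumPy illustration of this idea, not the paper's GhostSA implementation; the function names and the nearest-power-of-two rounding scheme are our own assumptions.

import numpy as np

def quantize_to_pow2(w, eps=1e-12):
    # Round each weight to sign(w) * 2^p with integer p, so that multiplying
    # by the quantized weight reduces to a single bit shift.
    sign = np.sign(w).astype(np.int64)
    p = np.rint(np.log2(np.abs(w) + eps)).astype(np.int64)
    return sign, p

def shift_add_dot(x, sign, p):
    # Multiplication-free dot product over non-negative integer activations x:
    # each term is a left or right bit shift followed by a signed addition.
    left = np.left_shift(x, np.clip(p, 0, None))
    right = np.right_shift(x, np.clip(-p, 0, None))
    terms = sign * np.where(p >= 0, left, right)
    return int(terms.sum())

# Toy check against the ordinary floating-point dot product.
rng = np.random.default_rng(0)
x = rng.integers(0, 16, size=8)        # integer activations
w = rng.normal(scale=0.5, size=8)      # real-valued weights
sign, p = quantize_to_pow2(w)
print("shift-add:", shift_add_dot(x, sign, p))
print("exact:    ", float(x @ w))

Rounding the exponent to the nearest integer keeps each quantized weight within a factor of sqrt(2) of the original, which is one reason shift-based networks are typically trained or fine-tuned with the quantization in place rather than quantized after training.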


Related research

GhostNets on Heterogeneous Devices via Cheap Operations (01/10/2022)
Deploying convolutional neural networks (CNNs) on mobile devices is diff...

ShiftAddNet: A Hardware-Inspired Deep Network (10/24/2020)
Multiplication (e.g., convolution) is arguably a cornerstone of modern d...

DietCNN: Multiplication-free Inference for Quantized CNNs (05/09/2023)
The rising demand for networked embedded systems with machine intelligen...

All You Need is a Few Shifts: Designing Efficient Convolutional Neural Networks for Image Classification (03/13/2019)
Shift operation is an efficient alternative over depthwise separable con...

ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks (05/17/2022)
Neural networks (NNs) with intensive multiplications (e.g., convolutions...

DeepCAM: A Fully CAM-based Inference Accelerator with Variable Hash Lengths for Energy-efficient Deep Neural Networks (02/09/2023)
With ever increasing depth and width in deep neural networks to achieve ...

MinConvNets: A new class of multiplication-less Neural Networks (01/23/2021)
Convolutional Neural Networks have achieved unprecedented success in ima...
