AdderNet and its Minimalist Hardware Design for Energy-Efficient Artificial Intelligence

01/25/2021
by Yunhe Wang, et al.

Convolutional neural networks (CNNs) have been widely used to boost the performance of many machine intelligence tasks. However, CNN models are usually computationally intensive and energy-consuming, since they are often designed with numerous multiply operations and considerable parameters for accuracy reasons. It is therefore difficult to apply them directly in resource-constrained environments such as 'Internet of Things' (IoT) devices and smartphones. To reduce the computational complexity and energy burden, we present a novel minimalist hardware architecture using the adder convolutional neural network (AdderNet), in which the original convolution is replaced by an adder kernel that uses only additions. To further reduce energy consumption, we explore a low-bit quantization algorithm for AdderNet with a shared-scaling-factor method, and we design both specific and general-purpose hardware accelerators for AdderNet. Experimental results show that the adder kernel with int8/int16 quantization still exhibits high performance while consuming far fewer resources (theoretically about 81% less). In addition, we deploy the quantized AdderNet on an FPGA (Field Programmable Gate Array) platform. The whole AdderNet can practically achieve a 16% speed-up and a 67.6% decrease in power consumption compared to a CNN under the same circuit architecture. With a comprehensive comparison of performance, power consumption, hardware resource consumption, and network generalization capability, we conclude that AdderNet is able to surpass all the other competitors, including the classical CNN, the novel memristor network, XNOR-Net, and the shift-kernel-based network, indicating its great potential for future high-performance and energy-efficient artificial intelligence applications.
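The core idea described above can be sketched in a few lines of NumPy: the adder kernel replaces the inner product of a convolution with the negative L1 distance between the filter and each input patch, so only additions and subtractions are needed. The snippet below is a minimal illustration of that idea, not the authors' implementation; the shared-scaling-factor quantizer is likewise an assumed simplification (a single scale for both activations and weights, so the integer difference |x_q - w_q| remains meaningful for int8 arithmetic).

```python
import numpy as np

def quantize_shared(x, w, num_bits=8):
    """Quantize input and weights with ONE shared scaling factor.

    A sketch of the shared-scaling-factor idea: using the same scale
    for x and w keeps |x_q - w_q| consistent in the integer domain,
    so the adder kernel can run entirely in low-bit integer hardware.
    """
    qmax = 2 ** (num_bits - 1) - 1                      # 127 for int8
    scale = max(np.abs(x).max(), np.abs(w).max()) / qmax
    xq = np.round(x / scale).astype(np.int32)
    wq = np.round(w / scale).astype(np.int32)
    return xq, wq, scale

def adder_conv2d(x, w):
    """Multiplication-free 'adder' convolution (valid padding, stride 1).

    x: input  of shape (C, H, W)
    w: filters of shape (K, C, kh, kw)
    Each output element is the NEGATIVE sum of absolute differences
    between a filter and the input patch: larger means more similar.
    """
    C, H, W = x.shape
    K, _, kh, kw = w.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    y = np.zeros((K, out_h, out_w), dtype=x.dtype)
    for k in range(K):
        for i in range(out_h):
            for j in range(out_w):
                patch = x[:, i:i + kh, j:j + kw]
                # only add/subtract operations, no multiplications
                y[k, i, j] = -np.abs(patch - w[k]).sum()
    return y
```

As a sanity check, a patch identical to its filter yields the maximum response of 0, and quantizing with the shared scale maps the largest magnitude in either tensor exactly to the integer limit (e.g. 127 for int8).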

