Low- and Mixed-Precision Inference Accelerators

06/24/2022
by Maarten Molendijk et al.

With the surging popularity of edge computing, the need to efficiently perform neural network inference on battery-constrained IoT devices has greatly increased. While algorithmic developments enable neural networks to solve increasingly complex tasks, deploying these networks on edge devices can be problematic due to stringent energy, latency, and memory requirements. One way to alleviate these requirements is to heavily quantize the neural network, i.e., to lower the precision of its operands. By taking quantization to the extreme, e.g., down to binary values, new opportunities arise to increase energy efficiency. Several hardware accelerators exploiting the opportunities of low-precision inference have been created, all aiming to enable neural network inference at the edge. In this chapter, design choices and their implications for the flexibility and energy efficiency of several accelerators supporting extremely quantized networks are reviewed.
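To make the arithmetic opportunity concrete: when weights and activations are binarized to {-1, +1}, a dot product reduces to an XNOR followed by a population count, replacing multiply-accumulate units with cheap bitwise logic. The Python sketch below is not taken from the chapter; the bit encoding and helper names are illustrative assumptions.

```python
# Minimal sketch of why binary quantization is hardware-friendly:
# a dot product over {-1, +1} vectors becomes XNOR + popcount.

def binarize(x):
    """Map a real value to {-1, +1} by its sign (0 maps to +1 here)."""
    return 1 if x >= 0 else -1

def binary_dot(weights, activations):
    """Dot product over {-1, +1} vectors via XNOR + popcount.

    Encode -1 as bit 0 and +1 as bit 1; XNOR marks matching signs.
    dot = (#matches) - (#mismatches) = 2 * popcount(xnor) - n
    """
    n = len(weights)
    w_bits = sum(1 << i for i, w in enumerate(weights) if w > 0)
    a_bits = sum(1 << i for i, a in enumerate(activations) if a > 0)
    xnor = ~(w_bits ^ a_bits) & ((1 << n) - 1)  # 1 where signs agree
    matches = bin(xnor).count("1")
    return 2 * matches - n

# Agrees with the ordinary dot product over the binarized vectors.
w = [binarize(v) for v in [0.3, -1.2, 0.7, -0.1]]
a = [binarize(v) for v in [-0.5, -0.9, 0.2, 0.4]]
assert binary_dot(w, a) == sum(wi * ai for wi, ai in zip(w, a))
```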
