
Deep Neural Network Approximation for Custom Hardware: Where We've Been, Where We're Going

01/21/2019
by Erwei Wang, et al.

Deep neural networks have proven to be particularly effective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory intensive, however, and so methods for hardware-oriented approximation have become a hot topic. Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy efficiency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-efficient alternatives, increasing the feasibility of network deployment. In this article, we provide a comprehensive evaluation of approximation methods for high-performance network inference along with in-depth discussion of their effectiveness for custom hardware implementation. We also include proposals for future research based on a thorough analysis of current trends. This article represents the first survey providing detailed comparisons of custom hardware accelerators featuring approximation for both convolutional and recurrent neural networks, through which we hope to inspire exciting new developments in the field.
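The hardware-oriented approximation the abstract refers to typically covers techniques such as reduced-precision quantization and weight pruning, which shrink a trained network's arithmetic and memory cost. As a rough illustration only (it is not taken from the paper; the function names, 8-bit width and 90% sparsity target below are assumptions), the following Python sketch shows per-tensor uniform quantization and magnitude pruning applied to a random weight matrix:

# Minimal sketch (not from the surveyed work) of two common approximation
# methods: uniform weight quantization and magnitude-based pruning.
# Bit width, sparsity target and function names are illustrative assumptions.
import numpy as np

def quantize_uniform(weights, num_bits=8):
    """Quantize a float tensor to signed integers with a single per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1               # e.g. 127 for 8 bits
    scale = np.max(np.abs(weights)) / qmax       # map the largest magnitude to qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale                              # dequantize with q * scale

def prune_by_magnitude(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights to reach the target sparsity."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

if __name__ == "__main__":
    w = np.random.randn(256, 256).astype(np.float32)
    q, scale = quantize_uniform(w)
    w_sparse = prune_by_magnitude(w)
    print("mean quantization error:", np.abs(w - q * scale).mean())
    print("sparsity achieved:", np.mean(w_sparse == 0.0))

In a co-designed accelerator, the integer weights and the induced sparsity are what allow narrower datapaths and skipped computation; the sketch only shows the numerical transformation itself.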

Related research

Hardware Approximate Techniques for Deep Neural Network Accelerators: A Survey (03/16/2022)
Deep Neural Networks (DNNs) are very popular because of their high perfo...

A Survey on Acceleration of Deep Convolutional Neural Networks (02/03/2018)
Deep Neural Networks have achieved remarkable progress during the past f...

Rethinking Arithmetic for Deep Neural Networks (05/07/2019)
We consider efficiency in deep neural networks. Hardware accelerators ar...

Recent Advances in Efficient Computation of Deep Convolutional Neural Networks (02/03/2018)
Deep neural networks have evolved remarkably over the past few years and...

Tree Methods for Hierarchical Classification in Parallel (09/21/2022)
We propose methods that enable efficient hierarchical classification in ...

Going Further With Winograd Convolutions: Tap-Wise Quantization for Efficient Inference on 4x4 Tile (09/26/2022)
Most of today's computer vision pipelines are built around deep neural n...