Flex-SFU: Accelerating DNN Activation Functions by Non-Uniform Piecewise Approximation

05/08/2023
by   Enrico Reggiani, et al.

Modern DNN workloads increasingly rely on activation functions composed of computationally complex operations. This poses a challenge to current accelerators, which are optimized for convolutions and matrix-matrix multiplications. This work presents Flex-SFU, a lightweight hardware accelerator for activation functions that implements non-uniform piecewise interpolation and supports multiple data formats. Non-uniform segments and floating-point numbers are enabled by implementing a binary-tree comparison within the address decoding unit. An SGD-based optimization algorithm with heuristics is proposed to find the interpolation function that minimizes the mean squared error. Thanks to non-uniform interpolation and floating-point support, Flex-SFU achieves on average 22.3x lower mean squared error than previous piecewise linear interpolation approaches. An evaluation on more than 700 computer vision and natural language processing models shows that Flex-SFU improves the end-to-end performance of state-of-the-art AI hardware accelerators by 35.7% on average, achieving up to 3.3x speedup with negligible impact on the models' accuracy when using 32 segments, while introducing an area and power overhead of only 5.9% and 0.8%, respectively.
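The abstract describes two ingredients: a segment lookup performed by a binary-tree comparison in the address decoding unit, and an SGD-based search (with heuristics) for the breakpoints and per-segment coefficients that minimize the mean squared error. The sketch below is a minimal, illustrative Python rendering of that idea under the assumption of a piecewise-linear form with per-segment slope and intercept; the function names, the finite-difference optimizer, and all hyperparameters are assumptions for illustration, not the authors' implementation or the Flex-SFU hardware flow.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): approximate tanh with a
# non-uniform piecewise-linear function whose breakpoints are refined by
# gradient descent on the mean squared error.

def pwl_eval(x, breakpoints, slopes, intercepts):
    """Evaluate the piecewise-linear approximation at x.

    np.searchsorted stands in for the binary-tree comparison of the
    address decoding unit: it locates the segment containing x in
    O(log n) comparisons, even for non-uniform segment boundaries.
    """
    idx = np.searchsorted(breakpoints, x, side="right")
    idx = np.clip(idx, 0, len(slopes) - 1)
    return slopes[idx] * x + intercepts[idx]

def fit_segments(fn, lo, hi, n_segments=32, iters=300, lr=1e-2, seed=0):
    """Fit non-uniform breakpoints and segment coefficients by minimizing MSE."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(lo, hi, size=4096)
    ys = fn(xs)

    # Start from uniform interior breakpoints, then refine them.
    breakpoints = np.linspace(lo, hi, n_segments + 1)[1:-1]
    slopes = np.zeros(n_segments)
    intercepts = np.zeros(n_segments)
    eps = 1e-4

    for _ in range(iters):
        # Least-squares line fit inside each segment, given current breakpoints.
        idx = np.clip(np.searchsorted(breakpoints, xs, side="right"),
                      0, n_segments - 1)
        for s in range(n_segments):
            mask = idx == s
            if mask.sum() >= 2:
                slopes[s], intercepts[s] = np.polyfit(xs[mask], ys[mask], 1)

        # Finite-difference gradient step on the breakpoints: a crude
        # stand-in for the paper's SGD-plus-heuristics search.
        err = np.mean((pwl_eval(xs, breakpoints, slopes, intercepts) - ys) ** 2)
        grad = np.zeros_like(breakpoints)
        for b in range(len(breakpoints)):
            trial = breakpoints.copy()
            trial[b] += eps
            err_b = np.mean((pwl_eval(xs, trial, slopes, intercepts) - ys) ** 2)
            grad[b] = (err_b - err) / eps
        breakpoints = np.sort(breakpoints - lr * grad)

    return breakpoints, slopes, intercepts

if __name__ == "__main__":
    bp, a, c = fit_segments(np.tanh, -4.0, 4.0, n_segments=32)
    probe = np.linspace(-4.0, 4.0, 1001)
    mse = np.mean((pwl_eval(probe, bp, a, c) - np.tanh(probe)) ** 2)
    print(f"MSE of 32-segment non-uniform PWL tanh approximation: {mse:.2e}")
```

In hardware, the same lookup would return the segment's stored slope and intercept from a small table, so evaluating any activation function reduces to one comparison tree traversal plus a fused multiply-add, which is what keeps the accelerator lightweight.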

