Reduced Softmax Unit for Deep Neural Network Accelerators

12/28/2021
by Raghuram S, et al.

The Softmax activation layer is a widely used Deep Neural Network (DNN) component for multi-class prediction problems. In DNN accelerator implementations, however, it adds complexity because an exponential must be computed for each of its inputs. In this brief we propose a simplified version of the activation unit for accelerators, in which a comparator alone produces the classification result by selecting the maximum among its inputs. Because the exponential is monotonically increasing and the normalizing denominator is common to all outputs, the index of the largest input is also the index of the largest Softmax output; we show that the comparator's result is therefore always identical to the classification produced by the Softmax layer.
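The equivalence can be checked numerically. The sketch below is only an illustration of the argmax-preservation argument, not the paper's hardware design; the function names softmax and reduced_softmax_class are hypothetical, and NumPy stands in for the accelerator datapath.

```python
import numpy as np

def softmax(logits):
    """Standard Softmax: exponentiate each input and normalize."""
    shifted = logits - np.max(logits)   # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / np.sum(exps)

def reduced_softmax_class(logits):
    """Comparator-only classification: just pick the index of the largest input."""
    return int(np.argmax(logits))

# exp() is strictly increasing and the normalizing denominator is shared by all
# outputs, so argmax(softmax(x)) == argmax(x) for any input vector x.
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=10)             # random logits for a 10-class problem
    assert np.argmax(softmax(x)) == reduced_softmax_class(x)
print("argmax of Softmax always matches argmax of the raw inputs")
```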

