Deep Learning using Rectified Linear Units (ReLU)

03/22/2018
by Abien Fred Agarap, et al.

We introduce the use of rectified linear units (ReLU) as the classification function in a deep neural network (DNN). Conventionally, ReLU is used as an activation function in DNNs, with the Softmax function as their classification function. However, there have been several studies on using a classification function other than Softmax, and this study is an addition to those. We accomplish this by taking the activation of the penultimate layer h_{n-1} in a neural network, then multiplying it by the weight parameters θ to get the raw scores o_i. Afterwards, we threshold the raw scores o_i at 0, i.e. f(o) = max(0, o_i), where f(o) is the ReLU function. We provide the class predictions ŷ through the argmax function, i.e. argmax f(x).
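As an illustration of the scheme described in the abstract, here is a minimal NumPy sketch (not taken from the paper; the function and variable names, shapes, and batching convention are our own assumptions): the penultimate activations h_{n-1} are multiplied by the weights θ to give raw scores, thresholded at zero by ReLU, and passed to argmax to obtain class predictions.

import numpy as np

def relu(o):
    # Element-wise ReLU: f(o) = max(0, o)
    return np.maximum(0.0, o)

def predict(h_penultimate, theta):
    # h_penultimate: (batch, d) activations of the penultimate layer h_{n-1}  [assumed shapes]
    # theta:         (d, num_classes) weight parameters of the final layer
    o = h_penultimate @ theta        # raw scores o_i
    f_o = relu(o)                    # thresholded scores f(o) = max(0, o_i)
    return np.argmax(f_o, axis=1)    # class predictions ŷ = argmax f(o)

This only sketches the forward pass of the final layer; how the network is trained with this classification function is covered in the full text.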
