Optimizing Performance of Feedforward and Convolutional Neural Networks through Dynamic Activation Functions

08/10/2023
by Chinmay Rane, et al.

Deep learning training algorithms have achieved great success in recent years across many fields, including speech, text, image, and video. Increasingly deep architectures have been proposed with great success, such as ResNet structures with around 152 layers. Shallow convolutional neural networks (CNNs) remain an active area of research, where some phenomena are still unexplained. The activation functions used in a network are of utmost importance, as they provide its nonlinearity; ReLU is the most commonly used activation function. We propose a more complex piecewise linear (PWL) activation in the hidden layers. We show that these PWL activations work significantly better than ReLU activations in our networks, for both convolutional neural networks and multilayer perceptrons. Comparative results in PyTorch for shallow and deep CNNs are given to further strengthen our case.
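To make the idea concrete, below is a minimal PyTorch sketch of a trainable piecewise linear activation used as a drop-in replacement for ReLU. The parameterization (a sum of shifted ReLU hinges with fixed, evenly spaced breakpoints and learnable slopes) and all names such as `PiecewiseLinearActivation` and `num_hinges` are assumptions for illustration; the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn


class PiecewiseLinearActivation(nn.Module):
    """Hypothetical learnable piecewise linear (PWL) activation.

    Assumed parameterization: a global linear term plus a sum of shifted
    ReLU hinges with trainable slopes, giving a continuous PWL curve.
    """

    def __init__(self, num_hinges: int = 5, x_min: float = -2.0, x_max: float = 2.0):
        super().__init__()
        # Fixed, evenly spaced breakpoints over [x_min, x_max] (assumption).
        self.register_buffer("breakpoints", torch.linspace(x_min, x_max, num_hinges))
        # One trainable slope per hinge, plus a global linear term and bias.
        self.slopes = nn.Parameter(torch.zeros(num_hinges))
        self.linear = nn.Parameter(torch.ones(1))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast the hinges over a trailing dimension and sum their
        # contributions: f(x) = a*x + b + sum_k s_k * relu(x - c_k).
        hinges = torch.relu(x.unsqueeze(-1) - self.breakpoints)  # (..., num_hinges)
        return self.linear * x + self.bias + (hinges * self.slopes).sum(dim=-1)


# Usage example: replacing ReLU in a small multilayer perceptron.
mlp = nn.Sequential(
    nn.Linear(784, 128),
    PiecewiseLinearActivation(),
    nn.Linear(128, 10),
)
out = mlp(torch.randn(32, 784))
print(out.shape)  # torch.Size([32, 10])
```

With the slopes initialized to zero and the linear term to one, the activation starts as the identity and learns its shape during training; other initializations (e.g., starting from a ReLU-like curve) are equally plausible.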


