Norm-preserving Orthogonal Permutation Linear Unit Activation Functions (OPLU)

04/08/2016
by Artem Chernodub, et al.

We propose a novel activation function that implements piece-wise orthogonal non-linear mappings based on permutations. It is straightforward to implement, highly computationally efficient, and has low memory requirements. We tested it on two toy problems for feedforward and recurrent networks, where it shows performance similar to tanh and ReLU. The OPLU activation function preserves the norm of the backpropagated gradients, so it is a promising candidate for training deep, extra-deep, and recurrent neural networks.
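
As a rough illustration (not the authors' reference code), the sketch below shows one common reading of such a permutation-based unit: each pair of pre-activations is returned sorted as (max, min). Since the output is always a reordering of the input, the mapping is an input-dependent permutation, which is orthogonal and therefore norm preserving. The function name oplu and the NumPy implementation are assumptions made for illustration.

```python
import numpy as np

def oplu(x):
    """Pairwise permutation activation sketch: each pair (a, b) -> (max, min).

    Illustrative only. Reordering values within pairs is a permutation of the
    input vector, so the Euclidean norm of x is unchanged.
    """
    x = np.asarray(x, dtype=float)
    assert x.shape[-1] % 2 == 0, "last dimension must be even to form pairs"
    a, b = x[..., 0::2], x[..., 1::2]
    hi, lo = np.maximum(a, b), np.minimum(a, b)
    out = np.empty_like(x)
    out[..., 0::2], out[..., 1::2] = hi, lo
    return out

# Norm-preservation check on random pre-activations
z = np.random.randn(4, 8)
y = oplu(z)
print(np.allclose(np.linalg.norm(z, axis=-1), np.linalg.norm(y, axis=-1)))  # True
```

Because the forward map only reorders entries, its Jacobian is a permutation matrix, which is the property behind the gradient-norm-preservation claim in the abstract.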

Related research

09/10/2022 | APTx: better activation function than MISH, SWISH, and ReLU's variants used in deep learning
Activation Functions introduce non-linearity in the deep neural networks...

09/03/2018 | PLU: The Piecewise Linear Unit Activation Function
Successive linear transforms followed by nonlinear "activation" function...

11/06/2020 | Parametric Flatten-T Swish: An Adaptive Non-linear Activation Function For Deep Learning
Activation function is a key component in deep learning that performs no...

08/21/2021 | SERF: Towards better training of deep neural networks using log-Softplus ERror activation Function
Activation functions play a pivotal role in determining the training dyn...

09/28/2020 | A thermodynamically consistent chemical spiking neuron capable of autonomous Hebbian learning
We propose a fully autonomous, thermodynamically consistent set of chemi...

11/07/2013 | Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks
In this paper we propose and investigate a novel nonlinear unit, called ...

06/24/2019 | Variations on the Chebyshev-Lagrange Activation Function
We seek to improve the data efficiency of neural networks and present no...
