Norm-preserving Orthogonal Permutation Linear Unit Activation Functions (OPLU)

04/08/2016
by Artem Chernodub, et al.

We propose a novel activation function that implements piece-wise orthogonal non-linear mappings based on permutations. It is straightforward to implement, computationally efficient, and has low memory requirements. We tested it on two toy problems for feedforward and recurrent networks, where it shows performance similar to tanh and ReLU. The OPLU activation function ensures norm preservation of the backpropagated gradients, making it potentially well suited for training deep, very deep, and recurrent neural networks.
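The abstract does not spell out the mapping itself; the sketch below is a minimal NumPy illustration of a permutation-based, norm-preserving activation of the kind described, assuming the pairwise max/min form commonly associated with OPLU. The function name `oplu` and the pairing of adjacent neurons are assumptions of this sketch, not necessarily the authors' exact formulation.

```python
import numpy as np

def oplu(x):
    """Sketch of an OPLU-style activation (assumed pairwise max/min form).

    Pre-activations are grouped into pairs (x[2k], x[2k+1]); each pair is
    replaced by (max, min) of its two values. For any fixed input, this is
    either the identity or a swap of the pair, i.e. a permutation, so the
    mapping is piece-wise orthogonal and the vector norm is unchanged. The
    Jacobian is the same permutation matrix, which is why backpropagated
    gradients keep their norm as well (except at exact ties).
    """
    assert x.shape[-1] % 2 == 0, "pairing neurons requires an even layer width"
    a, b = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = np.maximum(a, b)
    out[..., 1::2] = np.minimum(a, b)
    return out

# Quick check of norm preservation on random pre-activations.
x = np.random.randn(4, 8)
assert np.allclose(np.linalg.norm(oplu(x), axis=-1),
                   np.linalg.norm(x, axis=-1))
```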
