Variations on the Chebyshev-Lagrange Activation Function

06/24/2019
by Yuchen Li, et al.

We seek to improve the data efficiency of neural networks and present novel implementations of parameterized piecewise-polynomial activation functions. The parameters are the y-coordinates of n+1 Chebyshev nodes per hidden unit; Lagrange interpolation between the nodes produces the polynomial on [-1, 1]. We show results for different methods of handling inputs outside [-1, 1] on synthetic datasets, finding significant improvements in expressive capacity and interpolation accuracy for models that compute some form of linear extrapolation from either end. We demonstrate competitive or state-of-the-art performance on the classification of images (MNIST and CIFAR-10) and minimally-correlated vectors (DementiaBank) when we replace ReLU or tanh with linearly extrapolated Chebyshev-Lagrange activations in deep residual architectures.
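To make the construction concrete, the following is a minimal PyTorch sketch (not the authors' released code) of a Chebyshev-Lagrange activation with linear extrapolation beyond [-1, 1]. The class name, the default node count n = 4, the choice of first-kind Chebyshev nodes, the tanh-like initialization of the trainable y-coordinates, and the finite-difference estimate of the boundary slopes are illustrative assumptions rather than details taken from the paper.

import math
import torch
import torch.nn as nn


class ChebyshevLagrangeActivation(nn.Module):
    """Trainable piecewise-polynomial activation: one degree-n polynomial per
    hidden unit, defined by y-values at n + 1 Chebyshev nodes on [-1, 1] and
    extended linearly outside that interval. Expects inputs of shape
    (batch, num_units)."""

    def __init__(self, num_units: int, n: int = 4):
        super().__init__()
        k = torch.arange(n + 1, dtype=torch.float64)
        # Chebyshev nodes of the first kind on [-1, 1] (an assumption; the
        # abstract only specifies "n+1 Chebyshev nodes").
        nodes = torch.cos((2.0 * k + 1.0) * math.pi / (2.0 * (n + 1)))
        # Barycentric weights w_i = 1 / prod_{j != i} (x_i - x_j) for
        # numerically stable Lagrange interpolation.
        diff = nodes.unsqueeze(1) - nodes.unsqueeze(0)
        diff.fill_diagonal_(1.0)
        weights = 1.0 / diff.prod(dim=1)
        self.register_buffer("nodes", nodes.float())
        self.register_buffer("bary_w", weights.float())
        # Trainable y-coordinates, one set of n + 1 values per hidden unit,
        # initialized so the activation starts out tanh-like (an assumption).
        self.y = nn.Parameter(torch.tanh(self.nodes).repeat(num_units, 1))

    def _interp(self, x):
        # Barycentric Lagrange interpolation, broadcast over (batch, units).
        d = x.unsqueeze(-1) - self.nodes              # (batch, units, n + 1)
        at_node = d.abs() < 1e-7
        d = torch.where(at_node, torch.ones_like(d), d)
        num = (self.bary_w / d * self.y).sum(-1)
        den = (self.bary_w / d).sum(-1)
        exact = (at_node.float() * self.y).sum(-1)    # value when x is a node
        return torch.where(at_node.any(-1), exact, num / den)

    def forward(self, x):
        # Inside [-1, 1]: evaluate the interpolating polynomial.
        # Outside: extrapolate linearly from the value at the nearest
        # boundary, using a finite-difference slope as a simple stand-in
        # for the paper's linear-extrapolation variants.
        eps = 1e-3
        p = self._interp(x.clamp(-1.0, 1.0))
        one = torch.ones_like(x)
        slope_hi = (self._interp(one) - self._interp(one - eps)) / eps
        slope_lo = (self._interp(-one + eps) - self._interp(-one)) / eps
        out = torch.where(x > 1.0, p + slope_hi * (x - 1.0), p)
        out = torch.where(x < -1.0, p + slope_lo * (x + 1.0), out)
        return out

As a usage sketch, ChebyshevLagrangeActivation(num_units=hidden_dim) would stand in for nn.ReLU() or nn.Tanh() on the output of a width-hidden_dim layer inside a residual block, mirroring the substitution described in the abstract.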


