Goldilocks Neural Networks

02/11/2020
by Jan Rosenzweig, et al.

We introduce the new "Goldilocks" class of activation functions, which non-linearly deform the input signal only locally, when the input lies in the appropriate range. The small local deformation of the signal enables a better understanding of how and why the signal is transformed through the layers. Numerical results on the CIFAR-10 and CIFAR-100 data sets show that Goldilocks networks perform comparably to SELU and ReLU networks, while making the deformation of the data through the layers tractable.
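The abstract does not spell out the functional form, but as a rough illustration of the idea, a Goldilocks-style activation can be thought of as the identity map plus a small, smooth deformation that only acts when the input falls in a local range. The sketch below is a minimal NumPy example under that assumption; the function name, the Gaussian-bump deformation x + a*exp(-x^2 / (2*sigma^2)), and the parameters a and sigma are illustrative choices, not the authors' definition.

    import numpy as np

    def goldilocks_activation(x, a=0.5, sigma=1.0):
        # Hypothetical Goldilocks-style activation (illustrative only):
        # identity far from zero, plus a small smooth bump-shaped
        # deformation when the input lies within roughly +/- a few sigma.
        return x + a * np.exp(-x**2 / (2.0 * sigma**2))

    # Quick check: inputs far outside the local range pass through almost
    # unchanged, while inputs near zero are deformed by roughly a.
    x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
    print(goldilocks_activation(x))

Because the deformation is confined to a known input range, one can in principle track where in the network each part of the data is actually being transformed, which is the tractability property the abstract refers to.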


Related research

Deeper Learning with CoLU Activation (12/18/2021)
In neural networks, non-linearity is introduced by activation functions...

Saturated Non-Monotonic Activation Functions (05/12/2023)
Activation functions are essential to deep learning networks. Popular an...

A Comprehensive Survey and Performance Analysis of Activation Functions in Deep Learning (09/29/2021)
Neural networks have shown tremendous growth in recent years to solve nu...

Learned Deformation Stability in Convolutional Neural Networks (04/12/2018)
Conventional wisdom holds that interleaved pooling layers in convolution...

Approximating Activation Functions (01/17/2020)
ReLU is widely seen as the default choice for activation functions in ne...

Deforming the Loss Surface (07/24/2020)
In deep learning, it is usually assumed that the shape of the loss surfa...

Characterizing signal propagation to close the performance gap in unnormalized ResNets (01/21/2021)
Batch Normalization is a key component in almost all state-of-the-art im...
