Learning Activation Functions to Improve Deep Neural Networks

12/21/2014
by Forest Agostinelli et al.

Artificial neural networks typically have a fixed, non-linear activation function at each neuron. We have designed a novel form of piecewise linear activation function that is learned independently for each neuron using gradient descent. With this adaptive activation function, we are able to improve upon deep neural network architectures composed of static rectified linear units, achieving state-of-the-art performance on CIFAR-10 (7.51% error), CIFAR-100 (30.83% error), and a benchmark from high-energy physics involving Higgs boson decay modes.
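
The abstract does not spell out the parameterization, but one common way to realize a per-neuron learned piecewise linear activation of this kind is to add a set of learnable hinges to a rectified linear unit, h(x) = max(0, x) + sum_s a_s * max(0, -x + b_s), where the slopes a_s and hinge locations b_s are trained by gradient descent along with the network weights. The PyTorch sketch below illustrates this idea; the class name, hinge count, and initialization are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn as nn

class PiecewiseLinearActivation(nn.Module):
    """Sketch of a per-neuron learned piecewise linear activation:
        h(x) = max(0, x) + sum_s a_s * max(0, -x + b_s)
    The slopes a_s and hinge locations b_s are ordinary parameters,
    so gradient descent shapes the activation independently for each
    neuron. (Parameterization and initialization are assumptions.)"""

    def __init__(self, num_neurons: int, num_hinges: int = 2):
        super().__init__()
        # One (slope, hinge-location) pair per segment per neuron.
        self.a = nn.Parameter(torch.zeros(num_hinges, num_neurons))
        self.b = nn.Parameter(torch.randn(num_hinges, num_neurons) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, num_neurons); start from the fixed ReLU term.
        out = torch.clamp(x, min=0)
        for a_s, b_s in zip(self.a, self.b):
            # Each term adds a hinge with learned slope a_s at location b_s.
            out = out + a_s * torch.clamp(-x + b_s, min=0)
        return out

# Example: a hidden layer followed by the learned activation.
layer = nn.Linear(64, 128)
act = PiecewiseLinearActivation(num_neurons=128, num_hinges=2)
y = act(layer(torch.randn(32, 64)))  # shape: (32, 128)
```

Because the added hinge terms start at zero slope, the unit initially behaves like a plain ReLU and only departs from it where the training signal favors a different shape.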

Related research

02/27/2023 - Moderate Adaptive Linear Units (MoLU)
We propose a new high-performance activation function, Moderate Adaptive...

10/14/2018 - Variational Neural Networks: Every Layer and Neuron Can Be Unique
The choice of activation function can significantly influence the perfor...

09/25/2019 - Locally adaptive activation functions with slope recovery term for deep and physics-informed neural networks
We propose two approaches of locally adaptive activation functions namel...

10/06/2019 - Auto-Rotating Perceptrons
This paper proposes an improved design of the perceptron unit to mitigat...

06/16/2020 - Measuring Model Complexity of Neural Networks with Curve Activation Functions
It is fundamental to measure model complexity of deep neural networks. T...

07/17/2018 - Learning Neuron Non-Linearities with Kernel-Based Deep Neural Networks
The effectiveness of deep neural architectures has been widely supported...

11/21/2019 - DeepLABNet: End-to-end Learning of Deep Radial Basis Networks with Fully Learnable Basis Functions
From fully connected neural networks to convolutional neural networks, t...
