Differential Equation Units: Learning Functional Forms of Activation Functions from Data
Most deep neural networks use simple, fixed activation functions, such as sigmoids or rectified linear units, regardless of domain or network structure. We introduce differential equation units (DEUs), an improvement to modern neural networks, which enables each neuron to learn a particular nonlinear activation function from a family of solutions to an ordinary differential equation. Specifically, each neuron may change its functional form during training based on the behavior of the other parts of the network. We show that using neurons with DEU activation functions results in a more compact network capable of achieving performance comparable, if not superior, to that of much larger networks.
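To make the idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of how an activation function could be drawn from the solution family of a second-order linear ODE, a*y'' + b*y' + c*y = 0. The coefficients a, b, c and the solution constants c1, c2 play the role of learnable per-neuron parameters; changing them switches the closed-form solution between exponential, polynomial-exponential, and oscillatory regimes, which is how a single parametric family can cover qualitatively different activation shapes:

```python
import numpy as np

def deu_activation(x, a, b, c, c1=1.0, c2=0.0):
    """Illustrative activation taken from solutions of a*y'' + b*y' + c*y = 0.

    The parameters (a, b, c, c1, c2) stand in for learnable per-neuron
    coefficients; the functional form follows from the root structure
    of the characteristic polynomial a*r^2 + b*r + c = 0.
    """
    x = np.asarray(x, dtype=float)
    if abs(a) < 1e-12:
        # Degenerates to a first-order ODE b*y' + c*y = 0: pure exponential.
        return c1 * np.exp(-c / b * x)
    disc = b * b - 4.0 * a * c
    if disc > 0:
        # Two distinct real roots: sum of exponentials.
        r1 = (-b + np.sqrt(disc)) / (2.0 * a)
        r2 = (-b - np.sqrt(disc)) / (2.0 * a)
        return c1 * np.exp(r1 * x) + c2 * np.exp(r2 * x)
    if disc == 0:
        # Repeated root: polynomial-times-exponential solution.
        r = -b / (2.0 * a)
        return (c1 + c2 * x) * np.exp(r * x)
    # Complex roots: damped (or growing) oscillation.
    alpha = -b / (2.0 * a)
    beta = np.sqrt(-disc) / (2.0 * a)
    return np.exp(alpha * x) * (c1 * np.cos(beta * x) + c2 * np.sin(beta * x))
```

For example, (a, b, c) = (1, 0, -1) with c1 = c2 = 0.5 yields cosh(x), while (1, 0, 1) with c1 = 1, c2 = 0 yields cos(x) — two very different nonlinearities from the same parametric family. In a trainable network these coefficients would be updated by gradient descent alongside the weights, which is the mechanism the abstract describes: each neuron's functional form can drift during training.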