Lifted Neural Networks

05/03/2018
by Armin Askari, et al.

We describe a novel family of models of multi-layer feedforward neural networks in which the activation functions are encoded via penalties in the training problem. Our approach is based on representing a non-decreasing activation function as the argmin of an appropriate convex optimization problem. The new framework allows for algorithms such as block-coordinate descent methods to be applied, in which each step is composed of a simple (no hidden layer) supervised learning problem that is parallelizable across data points and/or layers. Experiments indicate that the proposed models provide excellent initial guesses for weights for standard neural networks. In addition, the model provides avenues for interesting extensions, such as robustness against noisy inputs and optimizing over parameters in activation functions.
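Below is a minimal, illustrative NumPy sketch (not the authors' implementation) of the two ideas in the abstract: writing a non-decreasing activation as the argmin of a convex problem, using ReLU as the example since relu(z) = argmin_{x >= 0} ||x - z||_2^2, and a block-coordinate sweep on a small two-layer lifted objective in which each weight update reduces to a plain least-squares problem. The dimensions, the squared-penalty objective, and the projected-gradient X-step are assumptions made for the sketch.

import numpy as np

def relu_via_argmin(z):
    # relu(z) = argmin_{x >= 0} ||x - z||_2^2, whose closed-form solution is max(z, 0)
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
n, d0, d1, k = 64, 5, 8, 3            # samples, input dim, hidden dim, output dim
X0 = rng.standard_normal((d0, n))     # inputs, one column per data point
Y  = rng.standard_normal((k, n))      # targets
W0 = rng.standard_normal((d1, d0))
W1 = rng.standard_normal((k, d1))
X1 = relu_via_argmin(W0 @ X0)         # lifted hidden-layer variable

# Block-coordinate sweeps on ||X1 - W0 X0||^2 + ||Y - W1 X1||^2 subject to X1 >= 0.
for _ in range(20):
    # W-step: with activations fixed, each weight block solves an ordinary
    # least-squares problem, i.e. a simple "no hidden layer" supervised problem.
    W0 = np.linalg.lstsq(X0.T, X1.T, rcond=None)[0].T   # min_W ||X1 - W X0||^2
    W1 = np.linalg.lstsq(X1.T, Y.T, rcond=None)[0].T    # min_W ||Y - W X1||^2

    # X-step: with weights fixed, each column of X1 solves a small nonnegative
    # least-squares problem; a few projected-gradient steps stand in for an exact solver.
    L = 1.0 + np.linalg.norm(W1, 2) ** 2                # gradient Lipschitz bound
    for _ in range(5):
        grad = (X1 - W0 @ X0) + W1.T @ (W1 @ X1 - Y)
        X1 = np.maximum(X1 - grad / L, 0.0)

loss = np.linalg.norm(Y - W1 @ relu_via_argmin(W0 @ X0)) ** 2 / n
print(f"feedforward squared loss per sample after the sweeps: {loss:.3f}")

Note that the X-step is separable across the columns of X1 (each data point is an independent small problem), which is what makes the per-step updates parallelizable across data points and layers.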



Related research

07/04/2021 · Data-Driven Learning of Feedforward Neural Networks with Different Activation Functions
This work contributes to the development of a new data-driven method (D-...

09/21/2021 · Neural networks with trainable matrix activation functions
The training process of neural networks usually optimize weights and bia...

11/20/2018 · Fenchel Lifted Networks: A Lagrange Relaxation of Neural Network Training
Despite the recent successes of deep neural networks, the corresponding ...

07/14/2017 · On the Complexity of Learning Neural Networks
The stunning empirical successes of neural networks currently lack rigor...

06/03/2016 · Dense Associative Memory for Pattern Recognition
A model of associative memory is studied, which stores and reliably retr...

06/25/2021 · Ladder Polynomial Neural Networks
Polynomial functions have plenty of useful analytical properties, but th...
