A New Training Method for Feedforward Neural Networks Based on Geometric Contraction Property of Activation Functions

06/20/2016

by Petre Birtea, et al.

We propose a new training method for feedforward neural networks whose activation functions have the geometric contraction property. The method constructs a new cost functional that is less nonlinear than the classical one, obtained by removing the nonlinearity of the activation functions from the output layer. We validate the method with a series of experiments that show improved learning speed and a lower classification error.
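The core idea of removing the output-layer nonlinearity can be illustrated with a minimal sketch: instead of minimizing the classical functional ||σ(Xw) − y||², one applies the inverse activation to the targets and minimizes the less nonlinear functional ||Xw − σ⁻¹(y)||². The sketch below assumes a sigmoid output and a single linear layer; all variable names are illustrative and not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(y):
    # Inverse of the sigmoid; targets must lie strictly in (0, 1).
    return np.log(y / (1.0 - y))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # 100 samples, 3 features
w_true = np.array([1.0, -2.0, 0.5])
y = sigmoid(X @ w_true)                # synthetic targets in (0, 1)

# Classical functional:  J(w)  = ||sigmoid(X w) - y||^2   (nonlinear in w)
# Modified functional:   J~(w) = ||X w - logit(y)||^2     (linear least squares)
w_hat, *_ = np.linalg.lstsq(X, logit(y), rcond=None)
```

For this noise-free example the modified functional is an ordinary least-squares problem, so `w_hat` recovers `w_true` in one solve instead of an iterative descent through the output nonlinearity.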


