An unfeasability view of neural network learning

01/04/2022
by Joos Heintz, et al.

We define the notion of a continuously differentiable perfect learning algorithm for multilayer neural network architectures and show that no such algorithm exists when the length of the data set exceeds the number of parameters involved and the activation functions are the logistic, hyperbolic tangent, or sine functions.
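A hedged formalization of this setting is sketched below; the symbols f_theta, A, n, p and N are introduced here only for illustration and may differ from the paper's exact definitions (scalar network output is assumed).

% Sketch of the setting (notation assumed for illustration only;
% the paper's exact definitions may differ).
\[
  f_\theta : \mathbb{R}^n \to \mathbb{R}, \qquad \theta \in \mathbb{R}^p
  \quad \text{(multilayer network with logistic, tanh, or sine activations)}
\]
\[
  D = \bigl((x_1, y_1), \ldots, (x_N, y_N)\bigr) \in (\mathbb{R}^n \times \mathbb{R})^N,
  \qquad N > p.
\]
\[
  \text{A perfect learning algorithm would be a map } \;
  A : (\mathbb{R}^n \times \mathbb{R})^N \to \mathbb{R}^p, \quad A \in C^1, \quad
  f_{A(D)}(x_i) = y_i \;\; (1 \le i \le N).
\]
\[
  \text{Claim: if } N > p, \text{ no such continuously differentiable map } A \text{ exists.}
\]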


research
04/05/2021

An Analysis of State-of-the-art Activation Functions For Supervised Deep Neural Network

This paper provides an analysis of state-of-the-art activation functions...
research
07/29/2021

Otimização de pesos e funções de ativação de redes neurais aplicadas na previsão de séries temporais (Optimization of weights and activation functions of neural networks applied to time series forecasting)

Neural Networks have been applied for time series prediction with good e...
research
11/21/2019

DeepLABNet: End-to-end Learning of Deep Radial Basis Networks with Fully Learnable Basis Functions

From fully connected neural networks to convolutional neural networks, t...
research
05/31/2022

A comparative study of back propagation and its alternatives on multilayer perceptrons

The de facto algorithm for training the back pass of a feedforward neura...
research
05/30/2021

Evolution of Activation Functions: An Empirical Investigation

The hyper-parameters of a neural network are traditionally designed thro...
research
03/07/2018

Neural network feedback controller for inertial platform

The paper describes an algorithm for the synthesis of neural networks to...
research
06/24/2020

Architopes: An Architecture Modification for Composite Pattern Learning, Increased Expressiveness, and Reduced Training Time

We introduce a simple neural network architecture modification that enab...
