On model selection and the disability of neural networks to decompose tasks

02/19/2002
by Marc Toussaint, et al.

A neural network with fixed topology can be regarded as a parametrization of functions, which determines the correlations between functional variations when parameters are adapted. We propose an analysis, based on a differential geometry point of view, that allows us to calculate these correlations. In practice, this describes how one response is unlearned while another is trained. For conventional feed-forward neural networks we find that they generically introduce strong correlations, are predisposed to forgetting, and are inappropriate for task decomposition. Perspectives on solving these problems are discussed.
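The coupling described above can be illustrated numerically. The sketch below (an assumption-laden toy example, not the paper's formal differential-geometric analysis) uses a tiny one-hidden-layer network: to first order, a gradient step of size eta that adapts the response at input x1 shifts the response at a second input x2 by approximately eta times the inner product of the two parameter gradients. When that inner product is large, training on x1 strongly perturbs (and can unlearn) the response at x2.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed-topology feed-forward net: 1 input, 4 hidden tanh units, 1 output.
W1 = rng.normal(size=(4, 1))
W2 = rng.normal(size=(1, 4))

def forward(x):
    """Scalar network output f(x)."""
    return (W2 @ np.tanh(W1 @ x)).item()

def grads(x):
    """Gradient of f(x) w.r.t. all parameters, flattened into one vector."""
    h = np.tanh(W1 @ x)                      # hidden activations, shape (4, 1)
    dW2 = h.T                                # df/dW2, shape (1, 4)
    dW1 = (W2.T * (1 - h**2)) @ x.T          # df/dW1, shape (4, 1)
    return np.concatenate([dW1.ravel(), dW2.ravel()])

x1 = np.array([[1.0]])
x2 = np.array([[0.9]])                       # a nearby, distinct input

g1, g2 = grads(x1), grads(x2)

# First-order prediction: a step eta*g1 (adapting f at x1) changes f(x2)
# by roughly eta * <g1, g2> -- the correlation of functional variations.
eta = 0.01
predicted = eta * (g1 @ g2)

before = forward(x2)
step = eta * g1
W1 += step[:4].reshape(4, 1)
W2 += step[4:].reshape(1, 4)
after = forward(x2)

print("predicted change at x2:", predicted)
print("actual change at x2:   ", after - before)
```

Because x1 and x2 are close, their parameter gradients are nearly parallel, so the inner product is large and the side effect on f(x2) is substantial; orthogonal gradients would leave f(x2) untouched, which is the regime a decomposing architecture would need.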


