Geometry Perspective Of Estimating Learning Capability Of Neural Networks

11/03/2020
by Ankan Dutta, et al.

The paper uses statistical and differential geometric arguments to acquire prior information about the learning capability of an artificial neural network on a given dataset. It considers a broad class of neural networks with generalized architectures performing simple least-squares regression with stochastic gradient descent (SGD), and analyzes the system's characteristics at two critical epochs in the learning trajectory. During certain epochs of the training phase, the system reaches equilibrium, with the generalization capability attaining a maximum. The system can also occupy localized, non-equilibrium states, which are characterized by the stabilization of the Hessian matrix. The paper proves that neural networks with higher generalization capability have a slower convergence rate. The relationship between generalization capability and the stability of the neural network is also discussed. By correlating principles of high-energy physics with the learning theory of neural networks, the paper establishes a variant of the Complexity-Action conjecture from an artificial-neural-network perspective.
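The abstract does not give the authors' experimental setup, but the quantities it discusses are easy to track in a toy experiment. Below is a minimal sketch, not the authors' code, of the setting the paper describes: a small network trained on a least-squares regression task with SGD, while we monitor (i) the generalization gap and (ii) the top eigenvalue of the loss Hessian, whose stabilization the paper associates with localized, non-equilibrium states. All architecture choices, hyperparameters, and the finite-difference Hessian estimator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = f(x) + noise.
n_train, n_test, d = 200, 200, 5
X_tr = rng.normal(size=(n_train, d))
X_te = rng.normal(size=(n_test, d))
w_true = rng.normal(size=d)
y_tr = np.tanh(X_tr @ w_true) + 0.1 * rng.normal(size=n_train)
y_te = np.tanh(X_te @ w_true) + 0.1 * rng.normal(size=n_test)

h = 16  # hidden width of the one-hidden-layer tanh network

def unpack(theta):
    W1 = theta[: d * h].reshape(d, h)
    w2 = theta[d * h :]
    return W1, w2

def predict(theta, X):
    W1, w2 = unpack(theta)
    return np.tanh(X @ W1) @ w2

def loss(theta, X, y):
    r = predict(theta, X) - y
    return 0.5 * np.mean(r ** 2)

def grad(theta, X, y):
    # Manual backprop for the least-squares loss.
    W1, w2 = unpack(theta)
    A = np.tanh(X @ W1)          # hidden activations
    r = A @ w2 - y               # residuals
    gw2 = A.T @ r / len(y)
    dZ = np.outer(r, w2) * (1 - A ** 2)
    gW1 = X.T @ dZ / len(y)
    return np.concatenate([gW1.ravel(), gw2])

def top_hessian_eig(theta, X, y, iters=30, eps=1e-4):
    # Power iteration using finite-difference Hessian-vector products:
    # H v ~ (grad(theta + eps*v) - grad(theta)) / eps.
    v = rng.normal(size=theta.size)
    v /= np.linalg.norm(v)
    g0 = grad(theta, X, y)
    lam = 0.0
    for _ in range(iters):
        Hv = (grad(theta + eps * v, X, y) - g0) / eps
        lam = v @ Hv                      # Rayleigh quotient (v is unit-norm)
        v = Hv / (np.linalg.norm(Hv) + 1e-12)
    return lam

theta = 0.1 * rng.normal(size=d * h + h)
lr, batch = 0.05, 20
for epoch in range(1, 51):
    idx = rng.permutation(n_train)
    for s in range(0, n_train, batch):
        b = idx[s : s + batch]
        theta -= lr * grad(theta, X_tr[b], y_tr[b])
    if epoch % 10 == 0:
        gap = loss(theta, X_te, y_te) - loss(theta, X_tr, y_tr)
        lam = top_hessian_eig(theta, X_tr, y_tr)
        print(f"epoch {epoch:3d}  gen-gap {gap:+.4f}  top Hessian eig {lam:.4f}")
```

In the paper's terms, one would look for the epoch where the generalization gap is smallest (the equilibrium point) and for a later regime where the leading Hessian eigenvalue stops changing (Hessian stabilization); the trade-off the paper proves would show up here as better-generalizing runs taking more epochs to converge.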


