A Geometric Approach of Gradient Descent Algorithms in Neural Networks

11/08/2018
by Yacine Chitour, et al.

In this article we present a geometric framework for analyzing the convergence of gradient descent trajectories in the context of neural networks. For linear networks with an arbitrary number of hidden layers, we characterize quantities that are conserved along the trajectories of the gradient descent system (GDS). We use them to prove that every trajectory of the GDS is bounded, which implies convergence to a critical point. We then focus on the local behavior in the neighborhood of each critical point and study the associated basins of attraction in order to measure the likelihood of converging to saddle points and local minima.
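To make the conserved-quantity idea concrete, here is a minimal numerical sketch (hypothetical illustration, not the authors' code). For a two-layer linear network f(x) = W2 W1 x trained on squared error, gradient flow is known to conserve the balancedness matrix W1 W1^T - W2^T W2; whether this coincides with the exact quantities used in the paper is an assumption, since the abstract does not spell them out. Small-step gradient descent keeps such a quantity approximately constant:

```python
import numpy as np

# Sketch: check that the balancedness matrix W1 @ W1.T - W2.T @ W2 drifts only
# slightly under small-step gradient descent on a two-layer linear network.
# (Under exact gradient flow it is conserved; discrete steps conserve it
# approximately.)

rng = np.random.default_rng(0)
d_in, d_hid, d_out, n = 5, 4, 3, 50

X = rng.standard_normal((d_in, n))
Y = rng.standard_normal((d_out, n))
W1 = 0.1 * rng.standard_normal((d_hid, d_in))
W2 = 0.1 * rng.standard_normal((d_out, d_hid))

def balancedness(W1, W2):
    return W1 @ W1.T - W2.T @ W2

lr = 1e-3
C0 = balancedness(W1, W2)
for _ in range(5000):
    R = W2 @ W1 @ X - Y          # residual of the linear network
    g1 = W2.T @ R @ X.T / n      # dL/dW1 for L = ||W2 W1 X - Y||^2 / (2n)
    g2 = R @ (W1 @ X).T / n      # dL/dW2
    W1 -= lr * g1
    W2 -= lr * g2

drift = np.linalg.norm(balancedness(W1, W2) - C0)
print(f"balancedness drift after training: {drift:.2e}")  # small for small lr
```

For deeper linear networks the analogous invariant under gradient flow is W_i W_i^T - W_{i+1}^T W_{i+1} for each pair of consecutive layers; invariants of this kind are the type of conserved quantity the abstract refers to when arguing boundedness of trajectories.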

Related research

08/04/2021
Convergence of gradient descent for learning linear neural networks
We study the convergence properties of gradient descent for training dee...

04/12/2021
Noether: The More Things Change, the More Stay the Same
Symmetries have proven to be important ingredients in the analysis of ne...

06/01/2020
Exit Time Analysis for Approximations of Gradient Descent Trajectories Around Saddle Points
This paper considers the problem of understanding the exit time for traj...

05/13/2018
The Global Optimization Geometry of Shallow Linear Neural Networks
We examine the squared error loss landscape of shallow linear neural net...

10/27/2020
Particle gradient descent model for point process generation
This paper introduces a generative model for planar point processes in a...

08/15/2016
A Geometric Framework for Convolutional Neural Networks
In this paper, a geometric framework for neural networks is proposed. Th...