On the overfly algorithm in deep learning of neural networks

07/27/2018
by Alexei Tsygvintsev, et al.

In this paper we investigate the supervised backpropagation training of multilayer neural networks from a dynamical systems point of view. We discuss some links with the qualitative theory of differential equations and introduce the overfly algorithm to tackle the local minima problem. Our approach is based on the existence of first integrals of the generalised gradient system with built-in dissipation.
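The abstract frames backpropagation training as a gradient system with built-in dissipation. As a minimal sketch of that viewpoint (the paper's overfly algorithm and network are not reproduced here; the toy energy `E`, its gradient, and the step size are illustrative assumptions), explicit Euler discretisation of the gradient flow w' = -∇E(w) recovers ordinary gradient descent on a non-convex energy with two minima:

```python
# Toy illustration (assumption: not the paper's actual loss or network):
# training viewed as the gradient flow  w' = -grad E(w), discretised with
# explicit Euler, which is exactly plain gradient descent.

def grad_E(w):
    """Gradient of the toy energy E(w1, w2) = (w1**2 - 1)**2 + w2**2."""
    w1, w2 = w
    return (4.0 * w1 * (w1**2 - 1.0), 2.0 * w2)

def euler_gradient_flow(w, step=0.05, n_steps=500):
    """Integrate w' = -grad E(w) with explicit Euler steps."""
    for _ in range(n_steps):
        g = grad_E(w)
        w = (w[0] - step * g[0], w[1] - step * g[1])
    return w

w_final = euler_gradient_flow((0.5, 1.0))
print(w_final)  # drifts toward the equilibrium near (1, 0)
```

Along exact trajectories of such a flow the energy is dissipated, since dE/dt = -||∇E||² ≤ 0, and local minima of E are the stable equilibria at which trajectories get trapped; this is the local minima problem the overfly algorithm is said to address.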


Related research:

- 02/07/2019 · Predict Globally, Correct Locally: Parallel-in-Time Optimal Control of Neural Networks. "The links between optimal control of dynamical systems and neural networ..."
- 11/25/2022 · Neural DAEs: Constrained neural networks. "In this article we investigate the effect of explicitly adding auxiliary..."
- 05/18/2021 · Learning stochastic dynamical systems with neural networks mimicking the Euler-Maruyama scheme. "Stochastic differential equations (SDEs) are one of the most important r..."
- 12/28/2021 · Continuous limits of residual neural networks in case of large input data. "Residual deep neural networks (ResNets) are mathematically described as ..."
- 05/10/2018 · Training Recurrent Neural Networks via Dynamical Trajectory-Based Optimization. "This paper introduces a new method to train recurrent neural networks us..."
- 10/06/2020 · Optimizing Deep Neural Networks via Discretization of Finite-Time Convergent Flows. "In this paper, we investigate in the context of deep neural networks, th..."
- 10/27/2017 · Multi-level Residual Networks from Dynamical Systems View. "Deep residual networks (ResNets) and their variants are widely used in m..."
