A Letter on Convergence of In-Parameter-Linear Nonlinear Neural Architectures with Gradient Learnings

11/25/2021
by Ivo Bukovsky, et al.

This letter summarizes and proves the concept of bounded-input bounded-state (BIBS) stability for weight convergence of a broad family of in-parameter-linear nonlinear neural architectures, as it generally applies to a broad family of incremental gradient learning algorithms. The derived proofs yield a practical BIBS convergence condition that can be evaluated for every individual learning sample or batch, making it suitable for real-time applications.
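To make the setting concrete, the sketch below illustrates what an in-parameter-linear nonlinear architecture with incremental gradient learning can look like: the output is linear in the weights but nonlinear in the inputs, and a step-size condition is checked at every learning sample. The feature map poly_features, the step size mu, and the per-sample bound 0 < mu < 2/||phi(x)||^2 (the classical normalized-LMS condition) are illustrative assumptions standing in for the letter's actual BIBS condition, not its derivation.

import numpy as np

def poly_features(x):
    # In-parameter-linear nonlinear map: bias, linear, and unique quadratic terms.
    # (A hypothetical choice; any fixed nonlinear map phi(x) keeps the output
    # y = w^T phi(x) linear in the weights w.)
    x = np.atleast_1d(x)
    quad = np.outer(x, x)[np.triu_indices(len(x))]
    return np.concatenate(([1.0], x, quad))

def incremental_gd(xs, ys, mu=0.2):
    # Sample-by-sample gradient descent on squared error: w += mu * e * phi(x).
    # Before each update, check the classical normalized-step bound
    # 0 < mu < 2 / ||phi(x)||^2 as a stand-in for a per-sample BIBS-style
    # convergence condition (assumed here, not the letter's exact condition).
    w = np.zeros(len(poly_features(xs[0])))
    for x, y in zip(xs, ys):
        phi = poly_features(x)
        norm2 = phi @ phi
        if not (0.0 < mu < 2.0 / norm2):
            raise ValueError(f"mu={mu} violates mu < 2/||phi||^2 = {2.0/norm2:.3g}")
        e = y - w @ phi          # prediction error for this sample
        w = w + mu * e * phi     # incremental gradient update
    return w

# Usage: learn a quadratic target (which lies in the model class) from streaming samples.
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=(200, 2))
ys = 0.5 + xs[:, 0] - 2.0 * xs[:, 0] * xs[:, 1]
print(incremental_gd(xs, ys, mu=0.2))

Because the architecture is linear in its parameters, the per-sample check only needs the current feature vector phi(x), which is what makes such a condition practical to evaluate online.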


