Learning Unstable Dynamical Systems with Time-Weighted Logarithmic Loss

07/10/2020
by Kamil Nar, et al.

When training the parameters of a linear dynamical model, the gradient descent algorithm is likely to fail to converge if the squared-error loss is used as the training loss function. Restricting the parameter space to a smaller subset and running the gradient descent algorithm within this subset can allow learning stable dynamical systems, but this strategy does not work for unstable systems. In this work, we look into the dynamics of the gradient descent algorithm and pinpoint what causes the difficulty of learning unstable systems. We show that observations taken at different times from the system to be learned influence the dynamics of the gradient descent algorithm to substantially different degrees. We introduce a time-weighted logarithmic loss function to fix this imbalance and demonstrate its effectiveness in learning unstable systems.
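A minimal numerical sketch of the imbalance described above, for a scalar system x_{t+1} = a·x_t with |a| > 1. The squared-error gradient at time step t grows exponentially in t, so late observations dominate the update. The time-weighted logarithmic loss shown here, with weight 1/t² and a log-ratio error term, is an illustrative form chosen to equalize the per-step contributions, not necessarily the paper's exact loss function.

```python
import numpy as np

a_true, x0, T = 1.5, 1.0, 10        # |a_true| > 1: unstable system
ts = np.arange(1, T + 1)
obs = a_true ** ts * x0             # noiseless observations x_t = a_true^t * x0

a_hat = 1.4                         # current parameter estimate
pred = a_hat ** ts * x0             # model predictions a_hat^t * x0
dpred = ts * a_hat ** (ts - 1) * x0 # d(pred_t)/d(a_hat)

# Squared-error loss per time step: (pred_t - x_t)^2.
# Its gradient magnitude grows exponentially with t.
grad_sq = 2 * (pred - obs) * dpred

# Illustrative time-weighted logarithmic loss per step:
#   (1/t^2) * log(pred_t / x_t)^2
# The 1/t^2 weighting cancels the t-dependence, so every observation
# contributes a gradient of the same magnitude.
grad_log = (1.0 / ts**2) * 2 * np.log(pred / obs) * dpred / pred

print(abs(grad_sq[-1] / grad_sq[0]))    # late steps dominate by orders of magnitude
print(abs(grad_log[-1] / grad_log[0]))  # ratio is 1: all steps contribute equally
```

With the squared loss, the last observation's gradient outweighs the first by several orders of magnitude, which forces an impractically small step size; under the weighted logarithmic loss the per-step gradients are uniform in t.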


