Gradient Descent Learns Linear Dynamical Systems

09/16/2016
by Moritz Hardt, et al.

We prove that gradient descent efficiently converges to the global optimizer of the maximum likelihood objective for an unknown linear time-invariant dynamical system, given a sequence of noisy observations generated by the system. Even though the objective function is non-convex, we provide polynomial running time and sample complexity bounds under strong but natural assumptions. Linear system identification has been studied for many decades, yet, to the best of our knowledge, these are the first polynomial guarantees for the problem we consider.
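The setting can be illustrated with a minimal toy sketch (not the paper's algorithm or parameterization): plain gradient descent on the squared prediction error of a scalar LTI system, with gradients approximated by finite differences. The system values, initialization, step size, and the use of numerical gradients are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "unknown" scalar LTI system: x_{t+1} = a x_t + b u_t, y_t = c x_t + noise
a_true, b_true, c_true = 0.8, 1.0, 0.5
T = 500
u = rng.standard_normal(T)

def simulate(a, b, c, u):
    """Run the system forward and return the noiseless outputs."""
    x, ys = 0.0, []
    for t in range(len(u)):
        ys.append(c * x)
        x = a * x + b * u[t]
    return np.array(ys)

# Noisy observations generated by the true system
y = simulate(a_true, b_true, c_true, u) + 0.01 * rng.standard_normal(T)

def loss(theta):
    """Mean squared prediction error of candidate parameters (a, b, c)."""
    return np.mean((simulate(*theta, u) - y) ** 2)

def num_grad(theta, eps=1e-6):
    """Central finite-difference gradient (illustrative; the paper uses exact gradients)."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (loss(theta + d) - loss(theta - d)) / (2 * eps)
    return g

theta0 = np.array([0.2, 0.5, 0.2])  # rough initial guess
theta = theta0.copy()
for _ in range(3000):
    theta -= 0.01 * num_grad(theta)  # plain gradient descent
```

Note that (b, c) are only identifiable up to the product bc from input-output data, so the loss decreases toward zero even though b and c individually need not match the true values.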


Related research

02/16/2016 · Gradient Descent Converges to Minimizers
We show that gradient descent converges to a local minimizer, almost sur...

02/23/2022 · Globally Convergent Policy Search over Dynamic Filters for Output Estimation
We introduce the first direct policy search algorithm which provably con...

02/23/2021 · Online Stochastic Gradient Descent Learns Linear Dynamical Systems from A Single Trajectory
This work investigates the problem of estimating the weight matrices of ...

10/02/2021 · Learning Networked Linear Dynamical Systems under Non-white Excitation from a Single Trajectory
We consider a networked linear dynamical system with p agents/nodes. We ...

03/01/2021 · Learners' languages
In "Backprop as functor", the authors show that the fundamental elements...

12/20/2021 · Adversarially Robust Stability Certificates can be Sample-Efficient
Motivated by bridging the simulation to reality gap in the context of sa...

07/24/2019 · Sparse Optimization on Measures with Over-parameterized Gradient Descent
Minimizing a convex function of a measure with a sparsity-inducing penal...
