Towards Scalable Koopman Operator Learning: Convergence Rates and A Distributed Learning Algorithm

09/30/2019
by   Zhiyuan Liu, et al.

In this paper, we propose an alternating optimization algorithm for the nonconvex Koopman operator learning problem for nonlinear dynamical systems. We show that the proposed algorithm converges to a critical point at rate O(1/T) or O(1/√T) under some mild assumptions. To handle high-dimensional nonlinear dynamical systems, we present the first distributed Koopman operator learning algorithm. We show that the distributed Koopman operator learning problem has the same convergence properties as the centralized one, in the absence of an optimal tracker, so long as the basis functions satisfy a set of state-based decomposition conditions. Experiments are provided to complement our theoretical results.
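To make the alternating structure concrete, below is a minimal sketch (not the authors' implementation) of alternating minimization for a Koopman learning objective of the form min over K and W of ||psi_W(Y) - psi_W(X) K||_F^2, where psi_W is a trainable dictionary of basis functions. The dictionary form (cosine features with trainable weights W), the step sizes, and all variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def psi(X, W):
    """Dictionary of basis functions: cosine features parameterized by W (assumed form)."""
    return np.cos(X @ W)                      # (n_samples, n_features)

def koopman_step(PX, PY, reg=1e-6):
    """Closed-form least-squares update of K with the basis functions held fixed."""
    G = PX.T @ PX + reg * np.eye(PX.shape[1])
    return np.linalg.solve(G, PX.T @ PY)

def dictionary_step(X, Y, W, K, lr=1e-3, eps=1e-5):
    """One numerical-gradient descent step on the dictionary parameters W, K held fixed."""
    def loss(Wc):
        return np.linalg.norm(psi(Y, Wc) - psi(X, Wc) @ K) ** 2
    grad = np.zeros_like(W)
    for idx in np.ndindex(W.shape):           # finite differences; fine for a tiny W
        Wp, Wm = W.copy(), W.copy()
        Wp[idx] += eps
        Wm[idx] -= eps
        grad[idx] = (loss(Wp) - loss(Wm)) / (2 * eps)
    return W - lr * grad

# Toy snapshot pairs (x_t, x_{t+1}) from a simple nonlinear map.
X = rng.normal(size=(200, 2))
Y = np.tanh(X @ np.array([[0.9, 0.1], [-0.2, 0.8]]))

W = rng.normal(size=(2, 10))                  # dictionary parameters
for t in range(50):                           # alternating optimization loop
    K = koopman_step(psi(X, W), psi(Y, W))    # (1) solve for the Koopman matrix
    W = dictionary_step(X, Y, W, K)           # (2) update the basis functions

print("final objective:", np.linalg.norm(psi(Y, W) - psi(X, W) @ K) ** 2)
```

The key design point the sketch illustrates is that one block (the Koopman matrix K) has a closed-form least-squares update, while the other block (the dictionary parameters) only admits a descent step, which is what makes the overall problem nonconvex and the alternating scheme natural.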
