OnsagerNet: Learning Stable and Interpretable Dynamics using a Generalized Onsager Principle
We propose a systematic method for learning stable and interpretable dynamical models from sampled trajectory data of physical processes, based on a generalized Onsager principle. The learned dynamics are autonomous ordinary differential equations parameterized by neural networks that retain clear physical structure, such as free energy, diffusion, conservative motion, and external force. The neural-network representations of the hidden dynamics are trained by minimizing the loss between sample data and predictions obtained from multiple steps of a Runge-Kutta method. For high-dimensional problems with a low-dimensional slow manifold, an autoencoder with a metric-preserving regularization is introduced to find low-dimensional generalized coordinates on which we learn the generalized Onsager dynamics. Our method exhibits clear advantages on two benchmark problems for learning ordinary differential equations: nonlinear Langevin dynamics and the Lorenz system. We further apply the method to Rayleigh-Bénard convection and learn Lorenz-like low-dimensional autonomous reduced-order models that capture both qualitative and quantitative properties of the underlying dynamics.
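The abstract does not spell out the exact parameterization or training loop, so the following is a minimal illustrative sketch, not the authors' code. It assumes dynamics of the dissipative-plus-conservative form dh/dt = -(M(h) + W(h)) grad V(h) + f(h), with M(h) symmetric positive semi-definite (diffusion), W(h) antisymmetric (conservative motion), V(h) a learned free energy, and f(h) an external force, and it shows one Runge-Kutta prediction step used in a trajectory-matching loss. Names such as OnsagerRHS and rk4_step are hypothetical, not taken from the paper.

import torch
import torch.nn as nn


class OnsagerRHS(nn.Module):
    """Hypothetical right-hand side with generalized Onsager structure."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.dim = dim
        # Raw matrix entries; symmetry / antisymmetry is imposed in forward().
        self.mat_net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 2 * dim * dim)
        )
        # Scalar potential (free energy) and external force.
        self.V_net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.f_net = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        if not h.requires_grad:            # grad of V w.r.t. h is needed below
            h = h.clone().requires_grad_(True)
        batch, d = h.shape
        raw = self.mat_net(h).view(batch, 2, d, d)
        M = raw[:, 0] @ raw[:, 0].transpose(1, 2)   # symmetric PSD: dissipation
        W = raw[:, 1] - raw[:, 1].transpose(1, 2)   # antisymmetric: conservation
        V = self.V_net(h).sum()
        gradV = torch.autograd.grad(V, h, create_graph=True)[0]
        return -torch.einsum("bij,bj->bi", M + W, gradV) + self.f_net(h)


def rk4_step(rhs, h, dt):
    """One classical Runge-Kutta step of the learned autonomous ODE dh/dt = rhs(h)."""
    k1 = rhs(h)
    k2 = rhs(h + 0.5 * dt * k1)
    k3 = rhs(h + 0.5 * dt * k2)
    k4 = rhs(h + dt * k3)
    return h + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)


# Toy usage: fit the model so RK4 predictions match consecutive trajectory
# samples (h_t, h_next); random tensors stand in for real sampled data here.
model = OnsagerRHS(dim=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
h_t, h_next = torch.randn(16, 3), torch.randn(16, 3)
pred = rk4_step(model, h_t, dt=0.01)
loss = ((pred - h_next) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()

In practice, the loss would typically sum prediction errors over several consecutive Runge-Kutta steps along each sampled trajectory, and for high-dimensional data the coordinates h would first be obtained from the autoencoder mentioned above; both details are omitted from this sketch.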