Dissipative SymODEN: Encoding Hamiltonian Dynamics with Dissipation and Control into Deep Learning

02/20/2020 · by Yaofeng Desmond Zhong, et al. · Princeton University

In this work, we introduce Dissipative SymODEN, a deep learning architecture which can infer the dynamics of a physical system with dissipation from observed state trajectories. To improve prediction accuracy while reducing network size, Dissipative SymODEN encodes the port-Hamiltonian dynamics with energy dissipation and external input into the design of its computation graph and learns the dynamics in a structured way. The learned model, by revealing key aspects of the system, such as the inertia, dissipation, and potential energy, paves the way for energy-based controllers.




1 Introduction

Inferring system dynamics from observed trajectories plays a critical role in the identification and control of complex physical systems, such as robotic manipulators [13] and HVAC systems [21]. Although the use of neural networks in this context has a rich history of more than three decades [15], recent advances in deep learning [6] have led to renewed interest in this topic [20, 10, 12, 4, 1]. Deep neural networks learn underlying patterns from data and enable generalization beyond the training set by incorporating appropriate inductive bias into the learning approach. To promote representations that are simple in some sense, inductive bias [9, 2] often manifests itself via a set of assumptions and guides a learning algorithm to pick one hypothesis over another. Success in predicting an outcome for previously unseen data depends on how well the inductive bias captures the ground reality. Inductive bias can be introduced as the prior in a Bayesian model, or via the choice of computation graph in a neural network.

Incorporation of physics-based priors into deep learning has been a key focus in recent times. As these approaches use neural networks to approximate system dynamics, they are more expressive than traditional system identification techniques [19]. By using a directed graph to capture the causal relationships in a physical system, [18] introduces a recurrent graph network to infer latent-space dynamics in robotic systems. [14] and [8] leverage Lagrangian mechanics to learn the dynamics of kinematic structures from discrete observations. On the other hand, [7] and [22] utilize Hamiltonian mechanics to learn dynamics from data. However, strict enforcement of the Hamiltonian prior is restrictive for real-life systems, which often lose energy in a structured way (e.g. frictional losses in robotic arms, resistive losses in power grids, etc.).

To explicitly encode dissipation as a prior into end-to-end learning, we expand the scope of the Symplectic ODE-Net (SymODEN) architecture [22] and propose Dissipative SymODEN. The underlying dynamics is motivated by the port-Hamiltonian formulation [16], which adds a correction term accounting for the prior of dissipation. With this term, Dissipative SymODEN can accommodate the energy losses from various sources of dissipation present in real-life systems. Our results show that including dissipation in the physics-informed SymODEN architecture improves its prediction accuracy and out-of-sample behavior, while offering insight into relevant physical properties of the system (such as the inertia matrix, potential energy, energy dissipation, etc.). These insights, in turn, can enable the use of energy-based controllers, such as controlled Lagrangians [3] and interconnection & damping assignment [16], which offer performance guarantees for complex, nonlinear systems.


The main contribution of this work is the introduction of a physics-informed learning architecture called Dissipative SymODEN, which encodes non-conservative physics, i.e. Hamiltonian dynamics with energy dissipation, into deep learning. This provides a means to uncover the dynamics of real-life physical systems whose Hamiltonian dynamics are modified by external input and energy dissipation. By ensuring that the computation graph is aligned with the underlying physics, we achieve transparency, better predictions with smaller networks, and improved generalization. The architecture of Dissipative SymODEN has also been designed to accommodate angle data in the embedded form. Additionally, we use differentiable ODE solvers to avoid the need for derivative estimation.

2 The Port-Hamiltonian Dynamics

Hamiltonian dynamics is often used to systematically describe the dynamics of a physical system in the phase space $(\mathbf{q}, \mathbf{p})$, where $\mathbf{q}$ is the generalized coordinate and $\mathbf{p}$ is the generalized momentum. In this approach, the key to the dynamics is a scalar function $H(\mathbf{q}, \mathbf{p})$, which is referred to as the Hamiltonian. In almost all physical systems, the Hamiltonian represents the total energy, which can be expressed as

$$H(\mathbf{q}, \mathbf{p}) = \frac{1}{2}\,\mathbf{p}^{\top} \mathbf{M}^{-1}(\mathbf{q})\, \mathbf{p} + V(\mathbf{q}), \tag{1}$$

where $\mathbf{M}(\mathbf{q})$ is the symmetric positive definite mass/inertia matrix and $V(\mathbf{q})$ represents the potential energy of the system. The equations of motion are governed by the symplectic gradient [17] of the Hamiltonian, i.e.,

$$\dot{\mathbf{q}} = \frac{\partial H}{\partial \mathbf{p}}, \qquad \dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{q}}. \tag{2}$$

Moreover, since $\dot{H} = (\partial H / \partial \mathbf{q})^{\top} \dot{\mathbf{q}} + (\partial H / \partial \mathbf{p})^{\top} \dot{\mathbf{p}} = 0$, moving along the symplectic gradient conserves the Hamiltonian (i.e. the total energy). However, although classical Hamiltonian dynamics ensures energy conservation, it fails to model dissipation and external inputs, which often appear in real-life systems. Port-Hamiltonian dynamics generalizes classical Hamiltonian dynamics by explicitly modelling the total energy, dissipation and external inputs. Motivated by this formulation, we consider the following port-Hamiltonian dynamics in this work:

$$\begin{bmatrix} \dot{\mathbf{q}} \\ \dot{\mathbf{p}} \end{bmatrix} = \left( \begin{bmatrix} \mathbf{0} & \mathbf{I} \\ -\mathbf{I} & \mathbf{0} \end{bmatrix} - \begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{D}(\mathbf{q}) \end{bmatrix} \right) \begin{bmatrix} \partial H / \partial \mathbf{q} \\ \partial H / \partial \mathbf{p} \end{bmatrix} + \begin{bmatrix} \mathbf{0} \\ \mathbf{g}(\mathbf{q}) \end{bmatrix} \mathbf{u}, \tag{3}$$

where the dissipation matrix $\mathbf{D}(\mathbf{q})$ is symmetric positive semi-definite and represents energy dissipation. The external input $\mathbf{u}$ is usually affine and only affects the generalized momenta. The input matrix $\mathbf{g}(\mathbf{q})$ is assumed to have full column rank. As expected, with zero dissipation and zero input, (3) reduces to classical Hamiltonian dynamics (2).
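As a concrete illustration (not from the paper), the following minimal Python sketch evaluates the port-Hamiltonian vector field (3) for a one-degree-of-freedom damped, actuated pendulum; the constants m, g, l, the scalar dissipation D and input gain g_in are hypothetical choices, and a fixed-step RK4 integrator stands in for a generic ODE solver. With zero input and positive dissipation, the total energy must decay along trajectories.

```python
import math

# Hypothetical 1-DoF pendulum: H(q, p) = p^2 / (2 m) + m g l (1 - cos q).
m, g, l = 1.0, 9.8, 1.0
D = 0.3     # scalar dissipation "matrix" (positive semi-definite)
g_in = 1.0  # scalar input "matrix"

def dH(q, p):
    """Gradient of the Hamiltonian: (dH/dq, dH/dp)."""
    return m * g * l * math.sin(q), p / m

def port_hamiltonian_field(q, p, u):
    """Right-hand side of Eq. (3) in the scalar case."""
    dHq, dHp = dH(q, p)
    qdot = dHp                        # symplectic part
    pdot = -dHq - D * dHp + g_in * u  # symplectic - dissipation + input
    return qdot, pdot

def rk4_step(q, p, u, h):
    k1 = port_hamiltonian_field(q, p, u)
    k2 = port_hamiltonian_field(q + h/2*k1[0], p + h/2*k1[1], u)
    k3 = port_hamiltonian_field(q + h/2*k2[0], p + h/2*k2[1], u)
    k4 = port_hamiltonian_field(q + h*k3[0], p + h*k3[1], u)
    return (q + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            p + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def hamiltonian(q, p):
    return p**2 / (2*m) + m*g*l*(1 - math.cos(q))

# With u = 0 and D > 0 the total energy decays along the trajectory.
q, p = 1.0, 0.0
energies = [hamiltonian(q, p)]
for _ in range(200):
    q, p = rk4_step(q, p, 0.0, 0.05)
    energies.append(hamiltonian(q, p))
```

Setting D = 0 and u = 0 in the same sketch recovers the energy-conserving classical Hamiltonian flow (2).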

3 Dissipative Symplectic ODE-Net

3.1 Training Neural ODE with Constant Forcing

We focus on the problem of learning an ordinary differential equation (ODE) from observation data. Assume the analytical form of the right-hand side (RHS) of an ODE "$\dot{\mathbf{x}} = f(\mathbf{x}, \mathbf{u})$" is unknown. Observation data collected under a constant input $\mathbf{u}$ allow us to approximate $f(\mathbf{x}, \mathbf{u})$ with a neural net by leveraging the augmented dynamics

$$(\dot{\mathbf{x}}, \dot{\mathbf{u}}) = (f(\mathbf{x}, \mathbf{u}), \mathbf{0}) = \tilde{f}(\mathbf{x}, \mathbf{u}). \tag{4}$$

Equation (4), by matching the input and output dimensions, enables us to feed it into Neural ODE [5]. With Neural ODE, we make predictions by approximating the RHS of (4) with a neural network $\tilde{f}_{\theta}$ and feeding it into an ODE solver:

$$\hat{\mathbf{x}}_{t_1}, \ldots, \hat{\mathbf{x}}_{t_{\tau}} = \mathrm{ODESolve}\big((\mathbf{x}_{t_0}, \mathbf{u}), \tilde{f}_{\theta}, (t_1, \ldots, t_{\tau})\big).$$

We can then construct the loss function $L = \sum_{i=1}^{\tau} \lVert \mathbf{x}_{t_i} - \hat{\mathbf{x}}_{t_i} \rVert_2^2$. In practice, we introduce the time horizon $\tau$ as a hyperparameter and predict $\hat{\mathbf{x}}_{t_1}, \ldots, \hat{\mathbf{x}}_{t_{\tau}}$ from the initial condition $\mathbf{x}_{t_0}$, where $t_i = t_0 + i\,\Delta t$. The problem is then how to design the network architecture of $\tilde{f}_{\theta}$, or equivalently $f_{\theta}$.
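The augment-and-roll-out procedure above can be sketched as follows. This is a pure-Python illustration under simplifying assumptions: the state is a scalar, the hypothetical true dynamics is x' = -x + u, and a fixed-step RK4 loop stands in for the ODE solver; `augment`, `odesolve` and `loss` are illustrative names, not the paper's API.

```python
import math

def augment(f):
    """Lift x' = f(x, u) to the augmented system (x, u)' = (f(x, u), 0),
    so a constant control becomes part of the ODE state, as in Eq. (4)."""
    def f_tilde(state):
        x, u = state
        return (f(x, u), 0.0)  # u' = 0: the input is held constant
    return f_tilde

def odesolve(f_tilde, state, h, steps):
    """Fixed-step RK4 rollout of the augmented system."""
    traj = []
    for _ in range(steps):
        k1 = f_tilde(state)
        s2 = tuple(s + h/2*k for s, k in zip(state, k1)); k2 = f_tilde(s2)
        s3 = tuple(s + h/2*k for s, k in zip(state, k2)); k3 = f_tilde(s3)
        s4 = tuple(s + h*k for s, k in zip(state, k3));   k4 = f_tilde(s4)
        state = tuple(s + h/6*(a + 2*b + 2*c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
        traj.append(state)
    return traj

def loss(pred, target):
    """Sum of squared errors over the predicted horizon."""
    return sum((x - y) ** 2 for (x, _), y in zip(pred, target))

# Hypothetical scalar dynamics x' = -x + u with constant control u = 0.5.
f_tilde = augment(lambda x, u: -x + u)
traj = odesolve(f_tilde, (1.0, 0.5), h=0.1, steps=10)
```

Because the control is absorbed into the state with zero derivative, any trajectory recorded under a constant input becomes valid training data for a single network approximating the RHS of (4).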

3.2 Learning from Generalized Coordinate and Momentum

Suppose we have data consisting of $(\mathbf{q}, \mathbf{p}, \mathbf{u})$, where $\mathbf{u}$ remains constant in each trajectory. We use four neural nets – $\mathbf{M}^{-1}_{\theta_1}(\mathbf{q})$, $V_{\theta_2}(\mathbf{q})$, $\mathbf{g}_{\theta_3}(\mathbf{q})$ and $\mathbf{D}_{\theta_4}(\mathbf{q})$ – as function approximators to represent the inverse of the mass matrix, the potential energy, the input matrix and the dissipation matrix, respectively. Thus,

$$(\dot{\mathbf{q}}, \dot{\mathbf{p}}, \dot{\mathbf{u}}) = f_{\theta}(\mathbf{q}, \mathbf{p}, \mathbf{u}) = \left( \frac{\partial H_{\theta}}{\partial \mathbf{p}},\; -\frac{\partial H_{\theta}}{\partial \mathbf{q}} - \mathbf{D}_{\theta_4}(\mathbf{q}) \frac{\partial H_{\theta}}{\partial \mathbf{p}} + \mathbf{g}_{\theta_3}(\mathbf{q})\,\mathbf{u},\; \mathbf{0} \right), \tag{5}$$

where the learned Hamiltonian is $H_{\theta}(\mathbf{q}, \mathbf{p}) = \tfrac{1}{2}\,\mathbf{p}^{\top} \mathbf{M}^{-1}_{\theta_1}(\mathbf{q})\, \mathbf{p} + V_{\theta_2}(\mathbf{q})$. The partial derivatives can be taken care of by automatic differentiation. By putting the designed $f_{\theta}$ into Neural ODE, we obtain a systematic way of adding the prior knowledge of structured dynamics into end-to-end learning.
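A minimal sketch of this composition for a one-dimensional system follows. Simple analytic functions (hypothetical, matching a damped pendulum) stand in for the four neural nets, and central finite differences stand in for the automatic differentiation the actual architecture uses.

```python
import math

# Stand-ins for the four function approximators; in the architecture
# each is a neural network of q (hypothetical analytic forms here).
def M_inv(q): return 3.0                        # inverse mass matrix
def V(q):     return 5.0 * (1.0 - math.cos(q))  # potential energy
def g_in(q):  return 1.0                        # input matrix
def D(q):     return 1.0                        # dissipation matrix

def H(q, p):
    """Learned Hamiltonian H_theta = p M^{-1}(q) p / 2 + V(q)."""
    return 0.5 * p * M_inv(q) * p + V(q)

def grad_H(q, p, eps=1e-6):
    """Central differences stand in for automatic differentiation."""
    dHq = (H(q + eps, p) - H(q - eps, p)) / (2 * eps)
    dHp = (H(q, p + eps) - H(q, p - eps)) / (2 * eps)
    return dHq, dHp

def f_theta(q, p, u):
    """Structured RHS of Eq. (5): (q', p', u')."""
    dHq, dHp = grad_H(q, p)
    return dHp, -dHq - D(q) * dHp + g_in(q) * u, 0.0
```

Note that only H, D and g appear in `f_theta`; the structure of (5) is what constrains the four approximators, which is exactly the inductive bias being encoded.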

3.3 Learning from Embedded Angle Data

Often, especially in robotics, the state variables involve angles residing in the interval $[-\pi, \pi)$. In other words, each angle lies on the manifold $\mathbb{S}^1$. However, generalized coordinates are typically assumed to lie on $\mathbb{R}^n$. To bridge this gap, we use an angle-aware design [22] and assume that the generalized coordinates are angles available in the embedded form $(\cos \mathbf{q}, \sin \mathbf{q})$. Then, similar to [22], we aim to learn a structured dynamics (3) expressed in terms of $\cos \mathbf{q}$, $\sin \mathbf{q}$ and $\mathbf{p}$. As $\dot{\mathbf{q}} = \partial H / \partial \mathbf{p}$, we can express this dynamics as

$$\Big( \tfrac{d}{dt}(\cos \mathbf{q}),\; \tfrac{d}{dt}(\sin \mathbf{q}),\; \dot{\mathbf{p}},\; \dot{\mathbf{u}} \Big) = \Big( -\sin \mathbf{q} \circ \frac{\partial H}{\partial \mathbf{p}},\; \cos \mathbf{q} \circ \frac{\partial H}{\partial \mathbf{p}},\; -\frac{\partial H}{\partial \mathbf{q}} - \mathbf{D}(\mathbf{q}) \frac{\partial H}{\partial \mathbf{p}} + \mathbf{g}(\mathbf{q})\,\mathbf{u},\; \mathbf{0} \Big), \tag{6}$$

where "$\circ$" represents the element-wise product. We assume $\mathbf{q}$ and $\mathbf{p}$ evolve with the structured dynamics (3) and substitute (3) into the RHS of (6). Similar to our approach in Sec. 3.2, we use four neural nets to express the RHS of (6) as $f_{\theta}(\cos \mathbf{q}, \sin \mathbf{q}, \mathbf{p}, \mathbf{u})$. Thus, it can be fed into Equation (4) and the Neural ODE.
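The angle-aware construction can be sketched for a single pendulum as follows. The Hamiltonian terms are hypothetical stand-ins for the learned networks (H = 1.5 p² + 5(1 − cos q), with D = 1 and g = 1 assumed), written directly in the embedded coordinates so that no explicit angle ever appears; a fixed-step RK4 loop stands in for the ODE solver.

```python
import math

def dH(c, s, p):
    """(dH/dq, dH/dp) in embedded coordinates:
    dH/dq = 5 sin q = 5 s, dH/dp = 3 p (hypothetical pendulum)."""
    return 5.0 * s, 3.0 * p

def embedded_field(state, u):
    """RHS of Eq. (6): d/dt (cos q, sin q, p)."""
    c, s, p = state
    dHq, dHp = dH(c, s, p)
    return (-s * dHp,                    # d(cos q)/dt = -sin q * dH/dp
            c * dHp,                     # d(sin q)/dt =  cos q * dH/dp
            -dHq - 1.0 * dHp + 1.0 * u)  # momentum equation with D, g

def rk4_step(field, state, u, h):
    k1 = field(state, u)
    k2 = field(tuple(x + h/2 * k for x, k in zip(state, k1)), u)
    k3 = field(tuple(x + h/2 * k for x, k in zip(state, k2)), u)
    k4 = field(tuple(x + h * k for x, k in zip(state, k3)), u)
    return tuple(x + h/6 * (a + 2*b + 2*c + d)
                 for x, a, b, c, d in zip(state, k1, k2, k3, k4))

# Roll out from q = 1, p = 0 with zero input; the embedded pair should
# stay (approximately) on the unit circle cos^2 q + sin^2 q = 1.
state = (math.cos(1.0), math.sin(1.0), 0.0)
for _ in range(100):
    state = rk4_step(embedded_field, state, 0.0, 0.02)
```

The exact flow of (6) preserves the circle constraint, since d/dt(cos²q + sin²q) = 0; the numerical rollout only drifts off it by the integrator's truncation error.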

3.4 Learning on Hybrid Spaces

In most physical systems, both translational coordinates and rotational coordinates coexist. In other words, the generalized coordinates lie on $\mathbb{R}^{n} \times \mathbb{T}^{m}$, where $\mathbb{T}^{m}$ denotes the $m$-torus. Here we put together the architectures of the previous two subsections. We assume the generalized coordinates are $\mathbf{q} = (\mathbf{r}, \boldsymbol{\phi})$, with translational coordinates $\mathbf{r} \in \mathbb{R}^{n}$ and angular coordinates $\boldsymbol{\phi} \in \mathbb{T}^{m}$, and the data comes in the form of $(\mathbf{r}, \cos \boldsymbol{\phi}, \sin \boldsymbol{\phi}, \mathbf{p}, \mathbf{u})$. We use four neural nets – $\mathbf{M}^{-1}_{\theta_1}$, $V_{\theta_2}$, $\mathbf{g}_{\theta_3}$ and $\mathbf{D}_{\theta_4}$ – as function approximators. The resulting dynamics combines (5) for the translational coordinates with (6) for the angular ones.

3.5 The Dissipation Matrix and the Mass Matrix

As the dissipation matrix models energy dissipation such as friction and resistance, it is positive semi-definite. We impose this constraint in the network architecture by $\mathbf{D}_{\theta_4}(\mathbf{q}) = \mathbf{L}_{D}(\mathbf{q}) \mathbf{L}_{D}^{\top}(\mathbf{q})$, where $\mathbf{L}_{D}(\mathbf{q})$ is a lower-triangular matrix. In real physical systems, both the mass matrix and its inverse are positive definite. Similarly, positive semi-definiteness is enforced by $\mathbf{M}^{-1}_{\theta_1}(\mathbf{q}) = \mathbf{L}_{M}(\mathbf{q}) \mathbf{L}_{M}^{\top}(\mathbf{q})$, where $\mathbf{L}_{M}(\mathbf{q})$ is a lower-triangular matrix. Positive definiteness is ensured by adding a small constant $\varepsilon$ to the diagonal elements of $\mathbf{M}^{-1}_{\theta_1}(\mathbf{q})$. This not only makes $\mathbf{M}^{-1}_{\theta_1}(\mathbf{q})$ invertible, but also stabilizes training.
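The Cholesky-style parameterization above can be sketched in a few lines. This assumes 2×2 matrices with the lower-triangular entries as the free parameters (in the architecture these entries are network outputs depending on q); the parameter values below are arbitrary illustrations.

```python
def lower_triangular(params):
    """Build a 2x2 lower-triangular L from 3 free parameters."""
    a, b, c = params
    return [[a, 0.0], [b, c]]

def psd_from_cholesky(params, eps=0.0):
    """Return L L^T (+ eps on the diagonal). The product is symmetric
    and positive semi-definite by construction; eps > 0 makes it
    positive definite and hence invertible."""
    L = lower_triangular(params)
    M = [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]
    for i in range(2):
        M[i][i] += eps
    return M

D_mat = psd_from_cholesky([1.0, -0.5, 0.2])            # dissipation matrix
M_inv = psd_from_cholesky([1.0, 0.3, 0.0], eps=0.01)   # inverse mass matrix
```

Because v·(L Lᵀ)v = ‖Lᵀv‖² ≥ 0 for every v, the constraint holds exactly for any parameter values, so no projection step is needed during training.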

4 Experiments

4.1 Experimental Setup

We use the following four tasks to evaluate the performance of the Dissipative SymODEN architecture – (i) Task 1: a pendulum with generalized coordinate and momentum data; (ii) Task 2: a pendulum with embedded angle data; (iii) Task 3: a CartPole system; and (iv) Task 4: an Acrobot.

Model Variants: Besides the Dissipative SymODEN model derived above, we consider a variant, called Unstructured (Unstr.) Dissipative SymODEN, which approximates the Hamiltonian by a fully connected neural net $H_{\theta}(\mathbf{q}, \mathbf{p})$. We also consider the original SymODEN [22] as a model variant.

Baseline Models: We set up baseline models for all four experiments. For the pendulum with generalized coordinate and momentum data, the naive baseline model approximates (5) – $f_{\theta}(\mathbf{q}, \mathbf{p}, \mathbf{u})$ – by a fully connected neural net. For all the other experiments, which involve embedded angle data, we set up two different baseline models: a naive baseline, which approximates $f_{\theta}(\cos \mathbf{q}, \sin \mathbf{q}, \mathbf{p}, \mathbf{u})$ by a fully connected neural net, and a geometric baseline, which respects the embedded angle geometry but approximates the underlying dynamics with a fully connected neural net.

Data Generation: For all tasks, we randomly generated initial conditions of states and subsequently combined them with 5 different constant control inputs to produce the initial conditions and inputs required for simulation. The simulators integrate the corresponding dynamics for 20 time steps to generate trajectory data, which is then used to construct the training set and test set.

Model training: In all the tasks, we train our model using the Adam optimizer [11] for 1000 epochs. We set a time horizon $\tau$ as a hyperparameter, and choose "RK4" as the numerical integration scheme in Neural ODE. We log the train error, test error and prediction (pred.) error per trajectory for all the tasks. Prediction error per trajectory is calculated by using the same initial state conditions as in the training set with a constant control, integrating 40 time steps forward.

4.2 Task 1: Pendulum with Generalized Coordinate and Momentum Data

Figure 1: Learned functions in Task 1 (Pendulum).

In this task, we use the model described in Section 3.2 and present the predicted trajectories of the learned models as well as the learned functions of Dissipative SymODEN. The underlying dynamics is given by

$$\dot{q} = 3p, \qquad \dot{p} = -5 \sin q - 3p + u, \tag{8}$$

with the Hamiltonian $H(q, p) = 1.5\,p^2 + 5(1 - \cos q)$. In other words, $M^{-1}(q) = 3$, $V(q) = 5(1 - \cos q)$, $g(q) = 1$ and $D(q) = 1$. Figure 1 shows that the learned $M^{-1}(q)$ and $g(q)$ match the ground truth pretty well. Also, $V(q)$ differs from the ground truth by an almost constant margin, which is expected since only the derivative of $V(q)$ impacts the dynamics. The learned dissipation matrix $D(q)$, however, does not match the ground truth. We address this issue in the next subsection. In Table 1, the Naive Baseline's prediction error is the lowest because its predicted trajectories reach the origin faster than the ground truth.

Figure 2: Learned trajectories of different models. Red and black lines represent the learned and ground truth trajectories, respectively, and the gray arrows show the vector fields learned by each model.

Dissipative SymODEN learns a more accurate vector field than the naive baseline model. Moreover, whereas SymODEN learns an energy-conserving vector field that differs only slightly from the ground truth, Unstructured Dissipative SymODEN fails to learn the vector field altogether.

4.3 Task 2: Pendulum with Embedded Data

Figure 3: Learned functions in Task 2 (Pendulum with embedded data).

In this task, the dynamics is the same as Equation (8), but the training data are generated by the OpenAI Gym simulator, i.e. we use embedded angle data and assume we only have access to $(\cos q, \sin q, \dot{q})$ instead of $(q, p)$. We use the model described in Section 3.3 to learn the structured dynamics. Without true momentum data, the learned functions match the ground truth only up to a scaling factor, as shown in Figure 3. Please refer to [22] for an explanation of this scaling. In this example, once the scaling is accounted for, the learned functions match the ground truth. With the angle-aware design, we learn the dissipation matrix much better than in the previous subsection.

4.4 Results

In Table 1, we show the train, test and prediction errors for all four tasks. Dissipative SymODEN performs the best on all three metrics. As SymODEN does not allow dissipation, it does not perform well on these tasks. Since the Unstructured Dissipative SymODEN architecture has trouble learning a good vector field, it performs the worst in all the tasks except Task 2. In conclusion, Dissipative SymODEN achieves higher accuracy with fewer model parameters. Moreover, the learned model reveals physical aspects of the system, which can be leveraged by energy-based controllers.

Table 1 compares the number of parameters and the train, test, and prediction errors of the Naive Baseline, Geometric Baseline, Unstructured Dissipative SymODEN, SymODEN, and Dissipative SymODEN on each of the four tasks (the Geometric Baseline does not apply to Task 1). [The numeric entries of the table were not recovered from the source.]

Table 1: Train, Test and Prediction Errors of Four Tasks


  • [1] I. Ayed, E. de Bézenac, A. Pajot, J. Brajard, and P. Gallinari (2019) Learning dynamical systems from partial observations. arXiv:1902.11136. Cited by: §1.
  • [2] J. Baxter (2000) A model of inductive bias learning. Journal of Artificial Intelligence Research 12, pp. 149–198. Cited by: §1.
  • [3] A. M. Bloch, N. E. Leonard, and J. E. Marsden (2001) Controlled Lagrangians and the stabilization of Euler–Poincaré mechanical systems. International Journal of Robust and Nonlinear Control 11 (3), pp. 191–214. Cited by: §1.
  • [4] A. Byravan and D. Fox (2017) SE3-Nets: learning rigid body motion using deep neural networks. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 173–180. Cited by: §1.
  • [5] T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud (2018) Neural ordinary differential equations. In Advances in Neural Information Processing Systems 31, pp. 6571–6583. Cited by: §3.1.
  • [6] I. Goodfellow, A. Courville, and Y. Bengio (2016) Deep learning. Vol. 1, MIT Press. Cited by: §1.
  • [7] S. Greydanus, M. Dzamba, and J. Yosinski (2019) Hamiltonian Neural Networks. arXiv:1906.01563. Cited by: §1.
  • [8] J. K. Gupta, K. Menda, Z. Manchester, and M. J. Kochenderfer (2019) A general framework for structured learning of mechanical systems. arXiv:1902.08705. Cited by: §1.
  • [9] D. Haussler (1988) Quantifying inductive bias: AI learning algorithms and Valiant’s learning framework. Artificial Intelligence 36 (2), pp. 177–221. Cited by: §1.
  • [10] M. Karl, M. Soelch, J. Bayer, and P. van der Smagt (2016) Deep variational Bayes filters: unsupervised learning of state space models from raw data. arXiv:1605.06432. Cited by: §1.
  • [11] D. P. Kingma and J. Ba (2014) Adam: A Method for Stochastic Optimization. arXiv:1412.6980. Cited by: §4.1.
  • [12] R. G. Krishnan, U. Shalit, and D. Sontag (2017) Structured inference networks for nonlinear state space models. In Thirty-First AAAI Conference on Artificial Intelligence, Cited by: §1.
  • [13] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra (2015) Continuous control with deep reinforcement learning. arXiv:1509.02971. Cited by: §1.
  • [14] M. Lutter, C. Ritter, and J. Peters (2019) Deep lagrangian networks: using physics as model prior for deep learning. In 7th International Conference on Learning Representations (ICLR), Cited by: §1.
  • [15] K. S. Narendra and K. Parthasarathy (1990) Identification and control of dynamical systems using neural networks. IEEE Transactions on Neural Networks 1 (1), pp. 4–27. Cited by: §1.
  • [16] R. Ortega, A. J. Van Der Schaft, B. Maschke, and G. Escobar (2002) Interconnection and damping assignment passivity-based control of port-controlled Hamiltonian systems. Automatica 38 (4), pp. 585–596. Cited by: §1.
  • [17] D. J. Rowe, A. Ryman, and G. Rosensteel (1980) Many-body quantum mechanics as a symplectic dynamical system. Physical Review A 22 (6), pp. 2362. Cited by: §2.
  • [18] A. Sanchez-Gonzalez, N. Heess, J. T. Springenberg, J. Merel, M. Riedmiller, R. Hadsell, and P. Battaglia (2018) Graph networks as learnable physics engines for inference and control. In International Conference on Machine Learning (ICML), pp. 4467–4476. Cited by: §1.
  • [19] T. Söderström and P. Stoica (1988) System identification. Prentice-Hall, Inc.. Cited by: §1.
  • [20] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller (2015) Embed to control: a locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems 28, pp. 2746–2754. Cited by: §1.
  • [21] T. Wei, Y. Wang, and Q. Zhu (2017) Deep Reinforcement Learning for Building HVAC Control. In Proceedings of the 54th Annual Design Automation Conference (DAC), pp. 22:1–22:6. Cited by: §1.
  • [22] Y. D. Zhong, B. Dey, and A. Chakraborty (2020) Symplectic ODE-Net: Learning Hamiltonian Dynamics with Control. In International Conference on Learning Representations (ICLR), Cited by: §1, §1, §3.3, §4.1, §4.3.