Graph network for simultaneous learning of forward and inverse physics
In this work, we propose an end-to-end graph network that learns forward and inverse models of particle-based physics using interpretable inductive biases. Physics-informed neural networks are often engineered to solve specific problems through problem-specific regularization and loss functions. Such explicit learning biases the network toward data-specific patterns and may require changes to the loss function or network architecture, thereby limiting generalizability. While recent studies have proposed graph networks for forward dynamics, they rely on particle-specific parameters such as mass to approximate the dynamics of the system. Our graph network is instead implicitly biased by learning to solve several tasks simultaneously, sharing representations across tasks to learn the forward dynamics and to infer the probability distribution of unknown particle-specific properties. We evaluate our approach on one-step next-state prediction tasks across diverse datasets featuring different particle interactions. Compared with related data-driven physics-learning approaches, our model predicts the forward dynamics with at least an order of magnitude higher accuracy. We also show that our approach recovers multi-modal probability distributions of unknown physical parameters using orders of magnitude fewer samples.
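As a rough illustration of the multi-task idea described above, and not the authors' actual architecture, the sketch below shows a message-passing graph network with a shared particle encoder and two heads: one predicting forward dynamics (here a 2-D acceleration) and one parameterizing a distribution over an unknown per-particle property (here a Gaussian mixture, a hypothetical modeling choice). All layer sizes, names, and output dimensions are assumptions for illustration.

```python
# Minimal sketch (not the authors' model): a shared message-passing backbone
# with a forward-dynamics head and an inverse head that outputs a mixture
# distribution over an unknown per-particle property. Dimensions are assumed.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class MultiTaskGraphNet(nn.Module):
    def __init__(self, node_dim, edge_dim, latent=128, n_mixture=4):
        super().__init__()
        self.encoder = mlp(node_dim, latent)                 # per-particle encoder
        self.edge_fn = mlp(2 * latent + edge_dim, latent)    # message function
        self.node_fn = mlp(2 * latent, latent)               # node update (shared representation)
        self.dyn_head = mlp(latent, 2)                       # forward dynamics: 2-D acceleration (assumed)
        self.prop_head = mlp(latent, 3 * n_mixture)          # inverse: mixture weights, means, log-stds

    def forward(self, x, edge_index, edge_attr):
        # x: [N, node_dim] particle states; edge_index: [2, E]; edge_attr: [E, edge_dim]
        h = self.encoder(x)
        src, dst = edge_index
        msg = self.edge_fn(torch.cat([h[src], h[dst], edge_attr], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, msg)    # sum incoming messages per particle
        h = self.node_fn(torch.cat([h, agg], dim=-1))        # shared latent used by both tasks
        accel = self.dyn_head(h)                             # one-step forward prediction
        w, mu, log_std = self.prop_head(h).chunk(3, dim=-1)  # multi-modal property distribution
        return accel, (w.softmax(dim=-1), mu, log_std.exp())
```

Training such a sketch would sum a dynamics loss (e.g. MSE on acceleration) and a negative log-likelihood of the known properties under the predicted mixture, which is what forces the two tasks to share the latent representation.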