Encoding Invariances in Deep Generative Models
Reliable training of generative adversarial networks (GANs) typically requires massive datasets in order to model complicated distributions. However, in several applications, training samples obey invariances that are known a priori; for example, in complex physics simulations, the training data obey universal laws encoded as well-defined mathematical equations. In this paper, we propose a new generative modeling approach, InvNet, that can efficiently model data spaces with known invariances. We devise an adversarial training algorithm that encodes these invariances into the generated data distribution. We validate our framework in three experimental settings: generating images with fixed motifs; solving nonlinear partial differential equations (PDEs); and reconstructing two-phase microstructures with desired statistical properties. We complement our experiments with several theoretical results.
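The core idea of augmenting adversarial training with a known invariance can be sketched as an extra penalty on the generator loss. Below is a minimal, hypothetical illustration using a toy linear invariance (each sample must satisfy x[0] + x[1] = 1); the paper's actual constraints (PDE residuals, image motifs, microstructure statistics) are problem-specific, and the weight `lam` is an assumed hyperparameter, not taken from the paper.

```python
import numpy as np

def invariance_penalty(samples):
    """Mean squared violation of a known invariance.

    Toy stand-in constraint: each sample x satisfies x[0] + x[1] = 1.
    In InvNet this role is played by problem-specific invariances
    such as PDE residuals or fixed image motifs.
    """
    residual = samples.sum(axis=1) - 1.0
    return float(np.mean(residual ** 2))

def generator_loss(disc_scores, samples, lam=10.0):
    """Non-saturating GAN generator loss plus the weighted
    invariance penalty (lam is a hypothetical trade-off weight)."""
    adversarial = -np.mean(np.log(disc_scores + 1e-8))
    return adversarial + lam * invariance_penalty(samples)

# Samples satisfying the invariance incur zero penalty;
# violating samples are penalized in proportion to the residual.
good = np.array([[0.3, 0.7], [0.5, 0.5]])
bad = np.array([[0.3, 0.3], [1.0, 1.0]])
print(invariance_penalty(good))  # 0.0
print(invariance_penalty(bad))   # 0.58
```

During training, this combined loss would be minimized with respect to the generator's parameters while the discriminator is trained as usual, steering the generator toward samples that both fool the discriminator and respect the invariance.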