Learning from Invalid Data: On Constraint Satisfaction in Generative Models

by Giorgio Giannone et al.

Generative models have demonstrated impressive results in vision, language, and speech. However, even with massive datasets, they struggle with precision, generating physically invalid or factually incorrect data. This is particularly problematic when the generated data must satisfy constraints, for example, to meet product specifications in engineering design or to adhere to the laws of physics in a natural scene. To improve precision while preserving diversity and fidelity, we propose a novel training mechanism that leverages datasets of constraint-violating data points, which we consider invalid. Our approach minimizes the divergence between the generative distribution and the valid prior while maximizing the divergence with the invalid distribution. We demonstrate how generative models like GANs and DDPMs that we augment to train with invalid data vastly outperform their standard counterparts, which train solely on valid data points. For example, our training procedure generates up to 98% fewer invalid samples, improves performance four-fold on a stacking block problem, and improves constraint satisfaction by 15%. We also analyze how the quality of the invalid data affects the learning procedure and the generalization properties of models. Finally, we demonstrate significant improvements in sample efficiency, showing that a tenfold increase in valid samples leads to a negligible difference in constraint satisfaction, while adding less than 10% invalid samples yields substantial gains. The proposed mechanism offers a promising solution for improving precision in generative models while preserving diversity and fidelity, particularly in domains where constraint satisfaction is critical and data is limited, such as engineering design, robotics, and medicine.
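To make the training idea concrete, here is a minimal sketch of how a GAN discriminator loss could be augmented with invalid samples: real valid data is pushed toward the "real" label, while both generated samples and known constraint-violating samples are pushed toward "fake", steering the generator away from the invalid distribution. The function names, the weighting hyperparameter `lam`, and the pure-Python formulation are illustrative assumptions, not the paper's actual implementation.

```python
import math

def bce_with_logits(logit, target):
    # Numerically stable binary cross-entropy on a raw logit.
    return max(logit, 0.0) - logit * target + math.log1p(math.exp(-abs(logit)))

def discriminator_loss(d_valid, d_fake, d_invalid, lam=1.0):
    """Standard GAN discriminator loss over batches of logits, extended with
    an extra term that labels constraint-violating ("invalid") samples as fake.
    `lam` is a hypothetical weight for the invalid-data term."""
    loss_valid = sum(bce_with_logits(x, 1.0) for x in d_valid) / len(d_valid)
    loss_fake = sum(bce_with_logits(x, 0.0) for x in d_fake) / len(d_fake)
    loss_invalid = sum(bce_with_logits(x, 0.0) for x in d_invalid) / len(d_invalid)
    return loss_valid + loss_fake + lam * loss_invalid

# A discriminator that scores invalid samples as "real" is penalized:
low = discriminator_loss([10.0], [-10.0], [-10.0])   # invalid correctly rejected
high = discriminator_loss([10.0], [-10.0], [10.0])   # invalid mistaken for real
assert low < high
```

The same idea carries over to DDPMs by adding a repulsive term against the invalid distribution to the denoising objective; the paper's exact divergence formulation is not reproduced here.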
