Invertible Tabular GANs: Killing Two Birds with One Stone for Tabular Data Synthesis

02/08/2022
by   Jaehoon Lee, et al.

Tabular data synthesis has received wide attention in the literature. This is because available data is often limited or incomplete, cannot be obtained easily, and data privacy is becoming increasingly important. In this work, we present a generalized GAN framework for tabular synthesis that combines the adversarial training of GANs with the negative log-density regularization of invertible neural networks. The proposed framework can be used for two distinct objectives. First, we can further improve synthesis quality by decreasing the negative log-density of real records during adversarial training. Conversely, by increasing the negative log-density of real records, realistic fake records can be synthesized so that they are not too close to real records, reducing the chance of potential information leakage. We conduct experiments with real-world datasets for classification, regression, and privacy attacks. In general, the proposed method demonstrates the best synthesis quality (in terms of task-oriented evaluation metrics, e.g., F1) when decreasing the negative log-density during adversarial training. When increasing the negative log-density, our experimental results show that the distance between real and fake records increases, enhancing robustness against privacy attacks.
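The two objectives above differ only in the sign of the log-density term added to the generator's adversarial loss. A minimal sketch of that combined objective is given below, using a toy one-dimensional affine flow as the invertible component; the function names, the affine parameterization, and the weighting scheme are illustrative assumptions, not the paper's actual architecture.

```python
import math

def standard_normal_logpdf(z):
    # log-density of a standard normal at z
    return -0.5 * (z * z + math.log(2 * math.pi))

def flow_log_density(x, scale, shift):
    # Toy invertible affine flow: z = (x - shift) / scale.
    # By the change-of-variables formula, log p(x) = log p_z(z) - log|scale|.
    z = (x - shift) / scale
    return standard_normal_logpdf(z) - math.log(abs(scale))

def generator_objective(adv_loss, real_batch, scale, shift, lam, direction):
    # direction = -1: decrease the negative log-density of real records
    #   (push the model toward higher likelihood on real data -> fidelity).
    # direction = +1: increase it (keep synthetic records away from real
    #   records -> privacy). `lam` is a hypothetical trade-off weight.
    nll_real = -sum(flow_log_density(x, scale, shift)
                    for x in real_batch) / len(real_batch)
    return adv_loss + direction * lam * nll_real
```

For example, with the same adversarial loss and batch, `direction=-1` lowers the total objective relative to `direction=+1`, reflecting the quality-versus-privacy trade-off the abstract describes.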

