Adversarially Slicing Generative Networks: Discriminator Slices Feature for One-Dimensional Optimal Transport
Generative adversarial networks (GANs) learn a target probability distribution by optimizing a generator and a discriminator with minimax objectives. This paper addresses the question of whether such optimization actually provides the generator with gradients that make its distribution close to the target distribution. We derive sufficient conditions for the discriminator to serve as a distance between the distributions by connecting the GAN formulation with the concept of sliced optimal transport. Furthermore, leveraging these theoretical results, we propose a novel GAN training scheme called the adversarially slicing generative network (ASGN). With only simple modifications, the ASGN is applicable to a broad class of existing GANs. Experiments on synthetic and image datasets support our theoretical results and demonstrate the ASGN's effectiveness compared with conventional GANs.
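To make the sliced-optimal-transport connection concrete, the following is a minimal, self-contained sketch of the sliced Wasserstein distance that the abstract alludes to: high-dimensional samples are projected onto random unit directions ("slices"), and on each slice the one-dimensional optimal transport cost reduces to matching sorted samples. This is a generic illustration of sliced optimal transport, not the paper's ASGN training scheme; the function name and all parameters are our own choices for illustration.

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=64, seed=0):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance between
    two empirical distributions x and y (arrays of shape (n, dim)).

    In one dimension, optimal transport between equal-size samples is
    solved exactly by sorting, which is what makes slicing attractive.
    (Illustrative sketch only; not the ASGN algorithm itself.)
    """
    rng = np.random.default_rng(seed)
    dim = x.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=dim)
        theta /= np.linalg.norm(theta)   # random unit direction (a "slice")
        px = np.sort(x @ theta)          # 1D projections, sorted
        py = np.sort(y @ theta)
        total += np.mean((px - py) ** 2)  # exact 1D squared W2 via sorting
    return np.sqrt(total / n_projections)
```

For example, the estimate is exactly zero when both inputs are the same sample set, and grows when one distribution is shifted away from the other.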