Why GANs are overkill for NLP

05/19/2022
by David Alvarez-Melis, et al.

This work offers a novel theoretical perspective on why, despite numerous attempts, adversarial approaches to generative modeling (e.g., GANs) have not become as popular for certain generation tasks, particularly sequential tasks such as Natural Language Generation, as they are in others, such as Computer Vision. In particular, on sequential data such as text, maximum-likelihood approaches are used far more widely than GANs. We show that, while it may seem that maximizing likelihood is inherently different from minimizing distinguishability, this distinction is largely artificial and holds only for limited families of models. We argue that minimizing KL-divergence (i.e., maximizing likelihood) is a more efficient way to minimize the same distinguishability criterion that adversarial models seek to optimize. Reductions show that minimizing distinguishability can be seen as simply boosting likelihood for certain families of models, including n-gram models and neural networks with a softmax output layer. To achieve a full polynomial-time reduction, we consider a novel next-token distinguishability model.
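
The link between likelihood and distinguishability that the abstract describes can be seen in a minimal numerical sketch (this is an illustration, not the paper's reduction): for a softmax next-token distribution, cross-entropy equals KL(p_data || p_model) plus the constant entropy of the data, so maximizing likelihood minimizes KL, and Pinsker's inequality then bounds the advantage of the best possible distinguisher. The vocabulary size, data distribution, and step size below are illustrative assumptions.

```python
# Toy illustration (not the paper's construction): fitting a softmax
# next-token model by maximum likelihood simultaneously drives down
# KL(p_data || p_model) and, via Pinsker's inequality, the accuracy of
# the best single-sample distinguisher between data and model.
import numpy as np

rng = np.random.default_rng(0)
V = 5                                 # illustrative vocabulary size
p_data = rng.dirichlet(np.ones(V))    # "true" next-token distribution

logits = np.zeros(V)                  # softmax-parameterized model

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for _ in range(2000):
    p_model = softmax(logits)
    # For a softmax layer, the gradient of the expected negative
    # log-likelihood E_{x ~ p_data}[-log p_model(x)] w.r.t. the logits
    # is exactly p_model - p_data.
    logits -= 0.5 * (p_model - p_data)

p_model = softmax(logits)
ce = -(p_data * np.log(p_model)).sum()          # cross-entropy (avg NLL)
kl = (p_data * np.log(p_data / p_model)).sum()  # KL(p_data || p_model)
h = -(p_data * np.log(p_data)).sum()            # entropy of the data
tv = 0.5 * np.abs(p_data - p_model).sum()       # total variation distance

print(f"cross-entropy      = {ce:.6f}")
print(f"KL + data entropy  = {kl + h:.6f}")     # identical up to float error
# Best distinguisher accuracy is (1 + TV)/2; Pinsker gives TV <= sqrt(KL/2),
# so as likelihood training shrinks KL, distinguishers approach chance level.
print(f"best distinguisher accuracy = {0.5 + tv / 2:.6f}")
print(f"Pinsker bound on TV         = {np.sqrt(kl / 2):.6f}")
```

The printout shows cross-entropy and KL-plus-entropy agreeing to floating-point precision, and the best distinguisher's accuracy collapsing toward 0.5 as the likelihood objective is optimized.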


research · 04/08/2018
Language Modeling with Generative Adversarial Networks
Generative Adversarial Networks (GANs) have been promising in the field ...

research · 05/23/2019
Training language GANs from Scratch
Generative Adversarial Networks (GANs) enjoy great success at image gene...

research · 01/28/2022
Generative Cooperative Networks for Natural Language Generation
Generative Adversarial Networks (GANs) have known a tremendous success f...

research · 02/27/2021
A Brief Introduction to Generative Models
We introduce and motivate generative modeling as a central task for mach...

research · 10/09/2018
Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs
Building on the success of deep learning, two modern approaches to learn...

research · 05/25/2019
Triple-to-Text: Converting RDF Triples into High-Quality Natural Languages via Optimizing an Inverse KL Divergence
Knowledge base is one of the main forms to represent information in a st...

research · 05/16/2020
A Text Reassembling Approach to Natural Language Generation
Recent years have seen a number of proposals for performing Natural Lang...
