Discriminative Adversarial Search for Abstractive Summarization

02/24/2020
by   Thomas Scialom, et al.

We introduce a novel approach for sequence decoding, Discriminative Adversarial Search (DAS), which has the desirable properties of alleviating the effects of exposure bias without requiring external metrics. Inspired by Generative Adversarial Networks (GANs), wherein a discriminator is used to improve the generator, our method differs from GANs in that the generator parameters are not updated at training time and the discriminator is only used to drive sequence generation at inference time. We investigate the effectiveness of the proposed approach on the task of Abstractive Summarization: the results obtained show that a naive application of DAS improves over the state-of-the-art methods, with further gains obtained via discriminator retraining. Moreover, we show how DAS can be effective for cross-domain adaptation. Finally, all results reported are obtained without additional rule-based filtering strategies, commonly used by the best performing systems available: this indicates that DAS can effectively be deployed without relying on post-hoc modifications of the generated outputs.
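The abstract describes using a discriminator only at inference time to steer decoding toward human-like sequences. A minimal sketch of that idea, assuming a re-ranking setup where each beam candidate comes with a generator log-probability and a discriminator "human-likeness" probability (the tuple layout, `alpha` mixing weight, and linear combination are illustrative assumptions, not the paper's exact formulation):

```python
import math

def das_rescore(candidates, alpha=0.5):
    """Re-rank beam candidates by mixing the generator's log-probability
    with a discriminator's probability that the sequence is human-written.

    `candidates` is a list of (sequence, gen_logprob, disc_prob) tuples;
    the names and the weighted-sum scheme are assumptions for illustration.
    """
    def score(c):
        _, gen_logprob, disc_prob = c
        # Weighted sum of generator log-likelihood and the log of the
        # discriminator's "human-written" probability.
        return alpha * gen_logprob + (1 - alpha) * math.log(disc_prob)
    return sorted(candidates, key=score, reverse=True)

# Toy beam: the generator slightly prefers candidate A, but the
# discriminator finds B far more human-like, which flips the ranking.
beam = [
    ("summary A", -2.0, 0.30),
    ("summary B", -2.4, 0.90),
]
ranked = das_rescore(beam, alpha=0.5)
print(ranked[0][0])  # → summary B
```

Because the generator's parameters are frozen, this kind of scoring only changes which candidate the search keeps, which is why the method needs no generator retraining.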

