Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder

06/15/2020
by Daya Guo, et al.

Generating inferential texts about an event from different perspectives requires reasoning over the different contexts in which the event occurs. Existing works usually ignore context that is not explicitly provided, resulting in a context-independent semantic representation that struggles to support the generation. To address this, we propose an approach that automatically finds evidence for an event from a large text corpus and leverages that evidence to guide the generation of inferential texts. Our approach works in an encoder-decoder manner and is equipped with a Vector Quantised-Variational Autoencoder, where the encoder outputs representations from a distribution over discrete variables. Such discrete representations enable the automatic selection of relevant evidence, which not only facilitates evidence-aware generation but also provides a natural way to uncover the rationales behind the generation. Our approach achieves state-of-the-art performance on both the Event2Mind and ATOMIC datasets. More importantly, we find that with discrete representations, our model selectively uses evidence to generate different inferential texts.
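The abstract's key mechanism is the VQ-VAE quantisation step: the encoder's continuous output is snapped to the nearest entry of a learned discrete codebook, and that discrete index is what supports evidence selection. The following is a minimal sketch of that quantisation step only, with hypothetical sizes and NumPy in place of the paper's actual model; none of these names or dimensions come from the paper itself.

```python
import numpy as np

# Hypothetical codebook: K discrete latent codes, each of dimension D.
# In a real VQ-VAE these vectors are learned jointly with the model.
K, D = 8, 4
rng = np.random.default_rng(0)
codebook = rng.normal(size=(K, D))

def quantise(z_e: np.ndarray):
    """Map a continuous encoder output z_e to its nearest codebook entry.

    Returns the discrete index k (usable for evidence selection /
    interpretability) and the quantised vector passed to the decoder.
    """
    dists = np.linalg.norm(codebook - z_e, axis=1)  # L2 distance to each code
    k = int(np.argmin(dists))                       # index of nearest code
    return k, codebook[k]

z_e = rng.normal(size=D)   # stand-in for an encoder output
k, z_q = quantise(z_e)
```

Because the decoder only ever sees one of K codebook vectors, each generated text can be traced back to a discrete code, which is the property the abstract uses to select evidence and expose rationales.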


