A task in a suit and a tie: paraphrase generation with semantic augmentation

10/31/2018
by Su Wang, et al.

Paraphrasing is rooted in semantics. We show the effectiveness of transformers (Vaswani et al. 2017) for paraphrase generation and further improvements by incorporating PropBank labels via a multi-encoder. Evaluating on MSCOCO and WikiAnswers, we find that transformers are fast and effective, and that semantic augmentation for both transformers and LSTMs leads to sizable 2-3 point gains in BLEU, METEOR and TER. More importantly, we find surprisingly large gains on human evaluations compared to previous models. Nevertheless, manual inspection of generated paraphrases reveals ample room for improvement: even our best model produces human-acceptable paraphrases for only 28 captions from the CHIA dataset (Sharma et al. 2018), and it fails spectacularly on sentences from Wikipedia. Overall, these results point to the potential for incorporating semantics in the task while highlighting the need for stronger evaluation.
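The abstract gives no implementation details, but the multi-encoder idea can be illustrated with a minimal sketch: one transformer encoder reads the source tokens, a second encoder reads per-token PropBank-style role labels, and the decoder attends to both encoded sequences. The module names, fusion-by-concatenation choice, and hyperparameters below are illustrative assumptions (plain PyTorch), not the authors' code; positional encodings and attention masks are omitted for brevity.

```python
import torch
import torch.nn as nn

class MultiEncoderParaphraser(nn.Module):
    """Illustrative two-encoder transformer: text encoder + semantic-label encoder."""
    def __init__(self, vocab_size, label_vocab_size, d_model=512, nhead=8, num_layers=4):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.lbl_emb = nn.Embedding(label_vocab_size, d_model)
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.label_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_tokens, src_labels, tgt_tokens):
        # Encode the sentence and its per-token PropBank-style labels separately.
        text_mem = self.text_encoder(self.tok_emb(src_tokens))
        label_mem = self.label_encoder(self.lbl_emb(src_labels))
        # One simple fusion choice: concatenate the two memories along the
        # sequence axis so the decoder can attend to tokens and labels jointly.
        memory = torch.cat([text_mem, label_mem], dim=1)
        hidden = self.decoder(self.tok_emb(tgt_tokens), memory)
        return self.out(hidden)  # logits over the target vocabulary
```

With token and label inputs of shape (batch, src_len) and target tokens of shape (batch, tgt_len), the forward pass returns logits of shape (batch, tgt_len, vocab_size); other fusion strategies (e.g., separate cross-attention per encoder) are equally plausible given only the abstract.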
