A Task in a Suit and a Tie: Paraphrase Generation with Semantic Augmentation

by Su Wang et al.
The University of Texas at Austin

Paraphrasing is rooted in semantics. We show the effectiveness of transformers (Vaswani et al. 2017) for paraphrase generation and further improvements by incorporating PropBank labels via a multi-encoder. Evaluating on MSCOCO and WikiAnswers, we find that transformers are fast and effective, and that semantic augmentation for both transformers and LSTMs leads to sizable 2-3 point gains in BLEU, METEOR and TER. More importantly, we find surprisingly large gains on human evaluations compared to previous models. Nevertheless, manual inspection of generated paraphrases reveals ample room for improvement: even our best model produces human-acceptable paraphrases for only 28% of captions from the CHIA dataset (Sharma et al. 2018), and it fails spectacularly on sentences from Wikipedia. Overall, these results point to the potential for incorporating semantics in the task while highlighting the need for stronger evaluation.
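
The core architectural idea in the abstract is the multi-encoder: the source sentence and its PropBank semantic-role labels are encoded in parallel, and the decoder attends over both. Below is a minimal PyTorch sketch of that shape, not the authors' released implementation: all names and dimensions are illustrative, fusion by concatenating the two encoder memories along the sequence axis is one assumed design choice among several, and positional encodings and padding masks are omitted for brevity.

```python
import torch
import torch.nn as nn


class MultiEncoderParaphraser(nn.Module):
    """Sketch of a semantically augmented paraphrase generator:
    one transformer encoder over tokens, one over PropBank labels,
    and a decoder that attends over both memories."""

    def __init__(self, vocab_size, srl_vocab_size, d_model=256,
                 nhead=4, num_layers=3):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.srl_emb = nn.Embedding(srl_vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # Two parallel encoders (TransformerEncoder clones the layer internally).
        self.tok_encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.srl_encoder = nn.TransformerEncoder(enc_layer, num_layers)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_tokens, src_srl, tgt_tokens):
        # Encode surface tokens and their semantic-role labels separately.
        tok_mem = self.tok_encoder(self.tok_emb(src_tokens))
        srl_mem = self.srl_encoder(self.srl_emb(src_srl))
        # Fuse by concatenation along the sequence axis so the decoder's
        # cross-attention can read from both memories.
        memory = torch.cat([tok_mem, srl_mem], dim=1)
        tgt = self.tok_emb(tgt_tokens)
        # Standard causal mask so each target position sees only its past.
        t = tgt.size(1)
        causal = torch.triu(
            torch.full((t, t), float("-inf"), device=tgt.device), diagonal=1)
        dec = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(dec)  # logits over the paraphrase vocabulary


# Toy usage: batch of 2, source length 7, SRL tags aligned to source tokens.
model = MultiEncoderParaphraser(vocab_size=1000, srl_vocab_size=32)
src = torch.randint(0, 1000, (2, 7))
srl = torch.randint(0, 32, (2, 7))
tgt = torch.randint(0, 1000, (2, 5))
logits = model(src, srl, tgt)  # shape: (2, 5, 1000)
```

Concatenating memories keeps the decoder standard; alternatives such as separate cross-attention blocks per encoder or summing the memories would also fit the multi-encoder description in the abstract.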

