Quality of syntactic implication of RL-based sentence summarization

12/11/2019
by Hoa T. Le, et al.

Work on summarization has explored both reinforcement learning (RL) optimization using ROUGE as a reward and syntax-aware models, such as models whose input is enriched with part-of-speech (POS) tags and dependency information. However, it is not clear what the respective impact of these approaches is beyond the standard ROUGE evaluation metric; in particular, RL-based summarization is becoming increasingly popular. In this paper, we provide a detailed comparison of these two approaches, and of their combination, along several dimensions that relate to the perceived quality of the generated summaries: number of repeated words, distribution of part-of-speech tags, impact of sentence length, relevance, and grammaticality. Using the standard Gigaword sentence summarization task, we compare an RL self-critical sequence training (SCST) method with syntax-aware models that leverage POS tags and dependency information. We show that the combined model gives the best results on all qualitative evaluations, but also that training with RL alone, without any syntactic information, already gives nearly as good results as syntax-aware models, with fewer parameters and faster training convergence.
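The SCST objective mentioned above rewards sampled summaries that outscore the model's own greedy decode. A minimal sketch of that loss, using a toy ROUGE-1 recall as the reward; the function names and the simplified reward are illustrative assumptions, not the authors' implementation:

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    # Fraction of reference tokens recovered by the candidate summary
    # (a simplified stand-in for the ROUGE reward used in the paper).
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum(min(cand[t], ref[t]) for t in ref)
    return overlap / max(len(reference), 1)

def scst_loss(sample_logprob, sampled, greedy, reference):
    # Self-critical baseline: the reward of the greedy decode. The gradient
    # pushes probability toward samples that beat the greedy output.
    advantage = rouge1_recall(sampled, reference) - rouge1_recall(greedy, reference)
    return -advantage * sample_logprob

# Toy example: the sampled summary recovers one more reference token
# than the greedy one, so its log-probability is reinforced.
loss = scst_loss(
    sample_logprob=-2.3,
    sampled=["cat", "sits", "on", "mat"],
    greedy=["cat", "on", "mat"],
    reference=["the", "cat", "sits", "on", "the", "mat"],
)
```

In a real system `sample_logprob` would come from the decoder and the loss would be backpropagated; here it is a plain float to keep the sketch self-contained.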


