Low-Resource Neural Headline Generation

07/31/2017
by Ottokar Tilk, et al.

Recent neural headline generation models have shown strong results, but they are generally trained on very large datasets. We focus on improving headline quality on smaller datasets by means of pre-training. We propose new methods that enable pre-training all parameters of the model and make use of all available text, yielding improvements of up to 32.4 points in perplexity and 2.84 points in ROUGE.
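To make the idea concrete, below is a minimal, hypothetical sketch (in PyTorch) of one plausible two-phase recipe: first pre-train the decoder as a language model on any available unpaired text, then fine-tune the full encoder-decoder on the small article-to-headline dataset. All names, model sizes, and the dummy data are illustrative assumptions, not the authors' exact method.

import torch
import torch.nn as nn

VOCAB, EMB, HID = 10_000, 128, 256

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)            # shared embeddings
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def lm_step(self, tokens):
        # Decoder-only language-model pass used during pre-training.
        h, _ = self.decoder(self.embed(tokens))
        return self.out(h)

    def forward(self, article, headline_in):
        _, ctx = self.encoder(self.embed(article))       # encode the article
        h, _ = self.decoder(self.embed(headline_in), ctx)
        return self.out(h)

model = Seq2Seq()
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Phase 1: language-model pre-training on monolingual text (dummy batch here).
text = torch.randint(0, VOCAB, (32, 20))
logits = model.lm_step(text[:, :-1])
loss = loss_fn(logits.reshape(-1, VOCAB), text[:, 1:].reshape(-1))
loss.backward(); opt.step(); opt.zero_grad()

# Phase 2: fine-tune the full model on the small article -> headline dataset.
article = torch.randint(0, VOCAB, (8, 50))
headline = torch.randint(0, VOCAB, (8, 12))
logits = model(article, headline[:, :-1])
loss = loss_fn(logits.reshape(-1, VOCAB), headline[:, 1:].reshape(-1))
loss.backward(); opt.step()

Sharing the embedding and decoder parameters between the language-model phase and the headline-generation phase is what lets all of the model's parameters benefit from the larger unpaired corpus before seeing the small paired dataset.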


