Importance-Aware Learning for Neural Headline Editing

11/25/2019
by Qingyang Wu, et al.

Many social media news writers are not professionally trained, so social media platforms hire professional editors to revise amateur headlines to attract more readers. We propose to automate this headline editing process with neural network models, providing more immediate writing support to social media news writers. To train such a neural headline editing model, we collected a dataset of articles paired with their original headlines and professionally edited headlines. However, collecting a large number of professionally edited headlines is expensive. To address this low-resource problem, we design an encoder-decoder model that leverages large-scale pre-trained language models. We further improve the pre-trained model by introducing headline generation as an intermediate adaptation task before the headline editing task. In addition, we propose a Self Importance-Aware (SIA) loss that addresses the varying degrees of editing in the dataset by down-weighting easily classified tokens and sentences. With the help of Pre-training, Adaptation, and SIA, the model learns to generate headlines in the professional editor's style. Experimental results show that our method significantly improves the quality of headline editing compared with previous methods.
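The abstract does not spell out the SIA formulation, but the description of down-weighting easily classified tokens suggests a focal-loss-style modulation of the token-level cross-entropy. Below is a minimal PyTorch sketch of that idea under this assumption; the names `sia_token_loss`, `gamma`, and `pad_id` are illustrative, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def sia_token_loss(logits, targets, gamma=2.0, pad_id=0):
    """Focal-style down-weighting of easily classified tokens.

    logits:  (batch, seq_len, vocab) decoder outputs
    targets: (batch, seq_len) gold headline token ids
    Tokens the model already predicts with high probability
    contribute less to the loss, so training focuses on the
    genuinely edited (hard) positions.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # Log-probability assigned to each gold token.
    gold_logp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    gold_p = gold_logp.exp()
    # (1 - p)^gamma shrinks the contribution of easy (high-p) tokens.
    weight = (1.0 - gold_p) ** gamma
    # Ignore padding positions when averaging.
    mask = (targets != pad_id).float()
    loss = -(weight * gold_logp * mask).sum() / mask.sum().clamp(min=1.0)
    return loss
```

A sentence-level analog would aggregate these per-token probabilities over a whole headline and apply the same modulation, so lightly edited examples weigh less than heavily rewritten ones, which matches the paper's stated goal of handling different levels of editing.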
