Denoising Pre-Training and Data Augmentation Strategies for Enhanced RDF Verbalization with Transformers

12/01/2020
by Sebastien Montella, et al.

The task of verbalizing RDF triples has grown in popularity with the rising ubiquity of Knowledge Bases (KBs). The RDF triple formalism is a simple and efficient way to store facts at large scale, but its abstract representation makes the data difficult for humans to interpret. To address this, the WebNLG challenge promotes automated RDF-to-text generation. We propose to leverage denoising pre-training of a Transformer model on data produced by an augmentation strategy. Our experiments show a minimum relative increase of 3.73% over standard training across seen categories, unseen entities, and unseen categories.
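To make the setting concrete, the sketch below illustrates what RDF-to-text data looks like and how a denoising corruption can be applied to it. The example triple, the <S>/<P>/<O> linearization tags, the linearize and add_noise helpers, and the masking rate are illustrative assumptions, not the paper's exact scheme; the corruption shown is a generic BART-style token masking, included only as an example of how a denoising pre-training input can be built.

```python
import random

# One RDF triple: (subject, predicate, object). Example data, not from the paper.
triple = ("Alan_Bean", "birthPlace", "Wheeler,_Texas")

def linearize(triples):
    """Flatten triples into a tagged source sequence.
    The <S>/<P>/<O> tags are an illustrative convention."""
    parts = []
    for s, p, o in triples:
        parts.append(f"<S> {s.replace('_', ' ')} <P> {p} <O> {o.replace('_', ' ')}")
    return " ".join(parts)

def add_noise(text, mask_prob=0.15, mask_token="<mask>", seed=0):
    """Generic BART-style token masking, used here as a stand-in
    for a denoising pre-training corruption."""
    rng = random.Random(seed)
    tokens = text.split()
    return " ".join(mask_token if rng.random() < mask_prob else t for t in tokens)

source = linearize([triple])
target = "Alan Bean was born in Wheeler, Texas."  # reference verbalization

# Denoising pre-training: the model reconstructs the clean sequence
# from its corrupted version; RDF-to-text fine-tuning maps source -> target.
print(add_noise(source))
print(source, "->", target)
```

In a setup of this kind, the model is first trained to recover the clean sequence from its corrupted form, then fine-tuned on the actual RDF-to-text pairs.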
