ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation

05/13/2022
by Long Phan, et al.

We present ViT5, a pretrained Transformer-based encoder-decoder model for the Vietnamese language. With T5-style self-supervised pretraining, ViT5 is trained on a large corpus of high-quality and diverse Vietnamese texts. We benchmark ViT5 on two downstream text generation tasks, Abstractive Text Summarization and Named Entity Recognition. Although Abstractive Text Summarization has been widely studied for the English language thanks to its rich and abundant data sources, there has been minimal research into the same task in Vietnamese, a much lower-resource language. In this work, we perform exhaustive experiments on both Vietnamese Abstractive Summarization and Named Entity Recognition, validating the performance of ViT5 against many other pretrained Transformer-based encoder-decoder models. Our experiments show that ViT5 significantly outperforms existing models and achieves state-of-the-art results on Vietnamese Text Summarization. On the task of Named Entity Recognition, ViT5 is competitive against previous best results from pretrained encoder-based Transformer models. Further analysis shows the importance of context length during self-supervised pretraining for downstream performance across different settings.
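As a concrete illustration, below is a minimal inference sketch showing how a pretrained encoder-decoder such as ViT5 can be applied to abstractive summarization through the Hugging Face Transformers API. The checkpoint name "VietAI/vit5-base" is an assumption based on the public ViT5 release; the generation hyperparameters are illustrative defaults, not the paper's experimental settings.

```python
# Minimal sketch: abstractive summarization with a ViT5-style encoder-decoder.
# Assumption: "VietAI/vit5-base" points at the public ViT5 release on the
# Hugging Face Hub; swap in your own (or a fine-tuned) checkpoint as needed.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "VietAI/vit5-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# A toy Vietnamese source document for illustration.
document = (
    "Việt Nam là một quốc gia nằm ở khu vực Đông Nam Á, "
    "với dân số hơn 90 triệu người."
)

# Encode the document, then decode a summary with beam search.
# max_length for truncation should match the context length the
# checkpoint was pretrained with, which the paper shows matters.
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(
    **inputs,
    max_length=128,
    num_beams=4,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

Because ViT5 is a text-to-text model, Named Entity Recognition can be served through the same generate() interface by casting the task as generation; only the input and output formatting changes, not the model code.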


