An Empirical Study of Using Pre-trained BERT Models for Vietnamese Relation Extraction Task at VLSP 2020

12/18/2020 ∙ by Pham Quang Nhat Minh, et al.

In this paper, we present an empirical study of using pre-trained BERT models for the relation extraction task at the VLSP 2020 Evaluation Campaign. We applied two state-of-the-art BERT-based models: R-BERT and BERT with entity starts. For each model, we compared two pre-trained BERT models: FPTAI/vibert and NlpHUST/vibert4news. We found that NlpHUST/vibert4news significantly outperforms FPTAI/vibert on the Vietnamese relation extraction task. Finally, we propose a simple ensemble model that combines R-BERT and BERT with entity starts. Our ensemble model slightly improves over the two single models on the development data provided by the task organizers.
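The abstract does not include code, but the "BERT with entity starts" architecture (Soares et al., 2019) can be illustrated with a minimal sketch: each entity is wrapped in marker tokens, and the relation is classified from the encoder's hidden states at the two entity-start markers. This is not the authors' implementation; the Hugging Face model id, the marker tokens, and the classifier head below are assumptions for illustration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Assumed Hugging Face model id for the vibert4news checkpoint.
MODEL_NAME = "NlpHUST/vibert4news-base-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Illustrative entity markers: [E1]/[/E1] wrap the first entity,
# [E2]/[/E2] wrap the second.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["[E1]", "[/E1]", "[E2]", "[/E2]"]}
)

class EntityStartRelationClassifier(nn.Module):
    """Relation classifier using the entity-start hidden states."""

    def __init__(self, model_name: str, num_relations: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        # Account for the added marker tokens.
        self.encoder.resize_token_embeddings(len(tokenizer))
        hidden = self.encoder.config.hidden_size
        # Classify from the concatenated [E1] and [E2] representations.
        self.classifier = nn.Linear(2 * hidden, num_relations)

    def forward(self, input_ids, attention_mask, e1_pos, e2_pos):
        # e1_pos / e2_pos: (batch,) token indices of [E1] and [E2].
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        h = out.last_hidden_state                 # (batch, seq_len, hidden)
        idx = torch.arange(h.size(0), device=h.device)
        e1 = h[idx, e1_pos]                       # hidden state at [E1]
        e2 = h[idx, e2_pos]                       # hidden state at [E2]
        return self.classifier(torch.cat([e1, e2], dim=-1))
```

The abstract does not specify how the ensemble combines R-BERT and BERT with entity starts; averaging the two models' class probabilities is one plausible, commonly used scheme, sketched here with hypothetical logit tensors from the two models.

```python
# Hypothetical per-example logits from the two single models.
probs = (r_bert_logits.softmax(-1) + entity_start_logits.softmax(-1)) / 2
predicted_relation = probs.argmax(dim=-1)
```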
