LaoPLM: Pre-trained Language Models for Lao

10/12/2021
by Nankai Lin, et al.

Trained on large corpora, pre-trained language models (PLMs) can capture different levels of concepts in context and hence generate universal language representations, benefiting multiple downstream natural language processing (NLP) tasks. Although PLMs are widely used in most NLP applications, especially for high-resource languages such as English, they remain under-represented in Lao NLP research. Previous work on Lao has been hampered by the lack of annotated datasets and the sparsity of language resources. In this work, we construct a text classification dataset to alleviate the resource-scarce situation of the Lao language. We additionally present the first transformer-based PLMs for Lao in four versions: BERT-small, BERT-base, ELECTRA-small, and ELECTRA-base, and evaluate them on two downstream tasks: part-of-speech tagging and text classification. Experiments demonstrate the effectiveness of our Lao models. We will release our models and datasets to the community, hoping to facilitate the future development of Lao NLP applications.
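As an illustration of how such checkpoints are typically consumed on the text classification task, the minimal sketch below loads a BERT-style Lao model through the Hugging Face transformers API. The checkpoint name "laoplm/bert-base" and the label count are placeholder assumptions for illustration, not the authors' published identifiers.

# Minimal sketch (not the authors' released code): running a BERT-style
# Lao PLM for text classification via Hugging Face transformers.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "laoplm/bert-base"  # hypothetical checkpoint identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=12,  # assumed number of classes in the classification dataset
)

texts = ["ຂ່າວເສດຖະກິດລາວມື້ນີ້"]  # example Lao input sentence
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)
predictions = logits.argmax(dim=-1)
print(predictions)

The same checkpoint could be adapted to the part-of-speech tagging task by swapping in AutoModelForTokenClassification, which attaches a per-token classification head instead of a sequence-level one.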
