PhoBERT: Pre-trained language models for Vietnamese

03/02/2020
by Dat Quoc Nguyen, et al.

We present PhoBERT in two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual language models pre-trained for Vietnamese. We show that PhoBERT improves the state of the art on multiple Vietnamese-specific NLP tasks, including part-of-speech tagging, named-entity recognition, and natural language inference. We release PhoBERT to facilitate future research and downstream applications for Vietnamese NLP. PhoBERT is released at: https://github.com/VinAIResearch/PhoBERT
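
The snippet below is a minimal sketch of downstream use, assuming the released base checkpoint is available on the Hugging Face model hub under the identifier "vinai/phobert-base" (the identifier referenced by the PhoBERT repository). It loads PhoBERT through the transformers library and extracts contextual features for one sentence; PhoBERT is pre-trained on word-segmented Vietnamese text, e.g. as produced by VnCoreNLP's RDRSegmenter, so the input is segmented beforehand.

    import torch
    from transformers import AutoModel, AutoTokenizer

    # Assumed model-hub identifier for the released base checkpoint;
    # see https://github.com/VinAIResearch/PhoBERT for the official release.
    phobert = AutoModel.from_pretrained("vinai/phobert-base")
    tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")

    # PhoBERT expects word-segmented input: the syllables of a multi-syllable
    # word are joined with "_" (here: "Chúng tôi" -> "Chúng_tôi").
    sentence = "Chúng_tôi là những nghiên_cứu_viên ."

    input_ids = torch.tensor([tokenizer.encode(sentence)])
    with torch.no_grad():
        outputs = phobert(input_ids)

    # Contextual embeddings for each subword token, shape (1, seq_len, 768).
    print(outputs.last_hidden_state.shape)

Fine-tuning these contextual features with a task-specific head is the standard route to the part-of-speech tagging, named-entity recognition, and natural language inference results reported above.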

Related research

BERTweet: A pre-trained language model for English Tweets (05/20/2020)
We present BERTweet, the first public large-scale pre-trained language m...

BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese (09/20/2021)
We present BARTpho with two versions – BARTpho_word and BARTpho_syllable...

ViDeBERTa: A powerful pre-trained language model for Vietnamese (01/25/2023)
This paper presents ViDeBERTa, a new pre-trained monolingual language mo...

VnCoreNLP: A Vietnamese Natural Language Processing Toolkit (01/04/2018)
We present an easy-to-use and fast toolkit, namely VnCoreNLP---a Java NL...

LaoPLM: Pre-trained Language Models for Lao (10/12/2021)
Trained on the large corpus, pre-trained language models (PLMs) can capt...

On Pre-trained Language Models for Antibody (01/28/2023)
Antibodies are vital proteins offering robust protection for the human b...

KLUE: Korean Language Understanding Evaluation (05/20/2021)
We introduce Korean Language Understanding Evaluation (KLUE) benchmark. ...
