BanglaBERT: Combating Embedding Barrier for Low-Resource Language Understanding

01/01/2021
by   Abhik Bhattacharjee, et al.

Pre-training language models on large volumes of data with self-supervised objectives has become standard practice in natural language processing. However, most such state-of-the-art models are available only for English and other resource-rich languages. Even in multilingual models, which are trained on hundreds of languages, low-resource ones remain underrepresented. Bangla, the seventh most widely spoken language in the world, is still low-resource: few downstream task datasets for Bangla language understanding are publicly available, and there is a clear shortage of good-quality data for pre-training. In this work, we build a Bangla natural language understanding model pre-trained on 18.6 GB of data we crawled from top Bangla sites on the internet. We introduce a new downstream task dataset and benchmark covering four tasks: sentence classification, document classification, natural language understanding, and sequence tagging. Our model outperforms multilingual baselines and previous state-of-the-art results by 1-6%. In the process, we identify a major shortcoming of multilingual models that hurts performance for low-resource languages that do not share a writing script with any high-resource one, which we name the 'Embedding Barrier'. We perform extensive experiments to study this barrier. We release all our datasets and pre-trained models to aid future NLP research on Bangla and other low-resource languages. Our code and data are available at https://github.com/csebuetnlp/banglabert.
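As a minimal sketch of the Embedding Barrier described above, the snippet below compares how a multilingual tokenizer and a monolingual Bangla tokenizer segment the same Bangla sentence; a multilingual vocabulary with little Bangla-script coverage tends to over-fragment the text. It uses the Hugging Face Transformers library; the model identifier "csebuetnlp/banglabert" is taken from the authors' repository and is assumed here, and the exact subword counts will depend on the tokenizers actually released.

    # Sketch: contrast subword fragmentation of Bangla text under a
    # multilingual tokenizer (mBERT) and a monolingual Bangla tokenizer.
    from transformers import AutoTokenizer

    sentence = "আমি বাংলায় গান গাই"  # "I sing in Bangla"

    # Multilingual baseline tokenizer.
    mbert_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    # Monolingual Bangla tokenizer (identifier assumed from the authors' release).
    bangla_tok = AutoTokenizer.from_pretrained("csebuetnlp/banglabert")

    for name, tok in [("mBERT", mbert_tok), ("BanglaBERT", bangla_tok)]:
        pieces = tok.tokenize(sentence)
        print(f"{name}: {len(pieces)} subwords -> {pieces}")

A language whose script is barely represented in the shared vocabulary is typically split into many short (often character-level) pieces, which wastes input length and leaves its token embeddings poorly trained; this is the effect the paper studies.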


