Bitext Mining for Low-Resource Languages via Contrastive Learning

08/23/2022
by   Weiting Tan, et al.

Mining high-quality bitexts for low-resource languages is challenging. This paper shows that sentence representations from language models fine-tuned with a multiple negatives ranking loss, a contrastive objective, help retrieve clean bitexts. Experiments show that parallel data mined by our approach substantially outperforms the previous state-of-the-art method on the low-resource languages Khmer and Pashto.
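For background, the multiple negatives ranking loss mentioned above is a standard in-batch contrastive objective: each source sentence's aligned translation acts as its positive, and the other targets in the same batch act as negatives. Below is a minimal PyTorch sketch of that objective; the cosine-similarity scoring and the `scale` temperature are common conventions assumed here for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def multiple_negatives_ranking_loss(src_emb: torch.Tensor,
                                    tgt_emb: torch.Tensor,
                                    scale: float = 20.0) -> torch.Tensor:
    """In-batch contrastive loss over parallel sentence pairs.

    src_emb, tgt_emb: (batch, dim) embeddings of aligned sentence pairs,
    where row i of src_emb is a translation of row i of tgt_emb.
    """
    # L2-normalize so dot products become cosine similarities.
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)

    # (batch, batch) similarity matrix; diagonal entries are the true pairs.
    scores = scale * src @ tgt.T

    # Each source's positive is its own row index; all other columns
    # in the batch serve as negatives.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```

In practice both sides of each parallel pair would be encoded with the same (or a tied) multilingual encoder during fine-tuning, and at mining time candidate pairs are scored by the same similarity used in training.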

