Learning Feature Weights using Reward Modeling for Denoising Parallel Corpora

03/11/2021
by Gaurav Kumar, et al.

Large web-crawled corpora represent an excellent resource for improving the performance of Neural Machine Translation (NMT) systems across several language pairs. However, since these corpora are typically extremely noisy, their use is fairly limited. Current approaches to dealing with this problem mainly focus on filtering using heuristics or single features such as language model scores or bilingual similarity. This work presents an alternative approach that learns weights for multiple sentence-level features. These feature weights, which are optimized directly for the task of improving translation performance, are used to score and filter sentences in the noisy corpora more effectively. We provide results of applying this technique to building NMT systems using the Paracrawl corpus for Estonian-English and show that it beats strong single-feature baselines and hand-designed combinations. Additionally, we analyze the sensitivity of this method to different types of noise and explore whether the learned weights generalize to other language pairs using the Maltese-English Paracrawl corpus.
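The scoring-and-filtering step described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the feature names, values, and keep fraction are assumptions for illustration, and the weight vector stands in for the weights that the paper learns via reward modeling.

```python
# Sketch: score noisy sentence pairs with a learned weight vector over
# sentence-level features, then keep only the highest-scoring fraction.

def score_pairs(feature_matrix, weights):
    """Weighted sum of per-pair features -> one score per sentence pair."""
    return [sum(w * f for w, f in zip(weights, feats)) for feats in feature_matrix]

def filter_top(pairs, scores, keep_fraction=0.5):
    """Keep the highest-scoring fraction of sentence pairs."""
    ranked = sorted(zip(scores, pairs), key=lambda x: x[0], reverse=True)
    n_keep = int(len(ranked) * keep_fraction)
    return [pair for _, pair in ranked[:n_keep]]

# Hypothetical features per pair: (LM score, bilingual similarity, length ratio)
pairs = [("tere", "hello"), ("asdf", "qwerty"), ("aitäh", "thanks"), ("??", "!!")]
features = [
    [0.9, 0.8, 1.0],
    [0.1, 0.0, 0.5],
    [0.8, 0.9, 0.9],
    [0.2, 0.1, 0.3],
]
weights = [0.5, 0.4, 0.1]  # in the paper these are learned, not hand-set

scores = score_pairs(features, weights)
kept = filter_top(pairs, scores, keep_fraction=0.5)
# kept contains the two plausible translation pairs
```

The key difference from single-feature filtering is that the weight vector combines several signals, and the paper's contribution is optimizing those weights directly for downstream translation quality rather than fixing them by hand.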


