Data Scaling Laws in NMT: The Effect of Noise and Architecture

02/04/2022
by Yamini Bansal, et al.

In this work, we study how varying the architecture and training data quality affects the data scaling properties of Neural Machine Translation (NMT). First, we establish that the test loss of encoder-decoder transformer models scales as a power law in the number of training samples, with a dependence on the model size. Then, we systematically vary aspects of the training setup to understand how they impact the data scaling laws. In particular, we change: (1) architecture and task setup: we compare to a transformer-LSTM hybrid and to a decoder-only transformer with a language modeling loss; (2) noise level in the training distribution: we experiment with filtering and with adding i.i.d. synthetic noise. In all of the above cases, we find that the data scaling exponents are minimally impacted, suggesting that marginally worse architectures or training data can be compensated for by adding more data. Lastly, we find that using back-translated data instead of parallel data can significantly degrade the scaling exponent.
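To make the power-law claim concrete: if test loss follows L(n) = C * n^(-alpha) in the number of training samples n, the scaling exponent alpha is the negated slope of a least-squares line in log-log space. The sketch below estimates alpha from a small set of (dataset size, loss) pairs. The data points are purely illustrative (generated from alpha = 0.2), not measurements from the paper.

```python
import math

# Hypothetical data-scaling measurements (illustrative, not from the paper):
# dataset sizes n (sentence pairs) and test losses lying near L(n) = C * n^-0.2.
data = [(1e5, 3.98), (1e6, 2.51), (1e7, 1.58), (1e8, 1.00)]

# With a pure power law, log L = log C - alpha * log n, so alpha is the
# negated slope of an ordinary least-squares fit in log-log coordinates.
xs = [math.log(n) for n, _ in data]
ys = [math.log(l) for _, l in data]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
alpha = -slope
print(f"estimated scaling exponent alpha = {alpha:.3f}")  # close to 0.2
```

A degraded exponent, as the paper reports for back-translated data, would show up here as a shallower log-log slope: each doubling of data buys a smaller reduction in loss.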


