Optimizing transformer-based machine translation model for single GPU training: a hyperparameter ablation study

08/11/2023
by Luv Verma, et al.

In machine translation tasks, the relationship between model complexity and performance is often presumed to be linear, driving ever-larger parameter counts and, with them, demands for computational resources such as multiple GPUs. To test this assumption, this study systematically ablates the hyperparameters of a sequence-to-sequence machine translation pipeline on a single NVIDIA A100 GPU. Contrary to expectations, the experiments revealed that the configurations with the most parameters were not necessarily the most effective. This insight prompted a careful reduction in parameter sizes, uncovering "sweet spots" that allow sophisticated models to be trained on a single GPU without compromising translation quality. The findings demonstrate an intricate relationship between hyperparameter selection, model size, and computational resource needs, and they contribute to ongoing efforts to make machine translation more accessible and cost-effective, emphasizing precise hyperparameter tuning over mere scaling.
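
As a concrete illustration of the kind of ablation the abstract describes, the sketch below enumerates a small hyperparameter grid for a sequence-to-sequence Transformer and reports each configuration's parameter count. It is a minimal sketch, assuming PyTorch's nn.Transformer as a stand-in for the paper's pipeline; the grid values are illustrative assumptions, not the study's actual search space.

    # Minimal ablation-grid sketch (illustrative values, not the paper's grid).
    import itertools
    import torch.nn as nn

    # Hypothetical search space: model width, attention heads, depth, FFN size.
    grid = {
        "d_model": [256, 512],
        "nhead": [4, 8],
        "num_layers": [2, 4, 6],
        "dim_feedforward": [1024, 2048],
    }

    for d_model, nhead, layers, ffn in itertools.product(*grid.values()):
        model = nn.Transformer(
            d_model=d_model,
            nhead=nhead,
            num_encoder_layers=layers,
            num_decoder_layers=layers,
            dim_feedforward=ffn,
            batch_first=True,
        )
        n_params = sum(p.numel() for p in model.parameters())
        # Each configuration would then be trained and scored (e.g. BLEU)
        # to locate parameter-count "sweet spots" that fit one GPU.
        print(f"d_model={d_model} nhead={nhead} layers={layers} "
              f"ffn={ffn} -> {n_params / 1e6:.1f}M params")

Counting parameters up front makes it cheap to discard configurations that cannot fit in a single GPU's memory before any training run, which is the spirit of the "sweet spot" search the study reports.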

Related research

10/01/2019  Auto-Sizing the Transformer Network: Improving Speed, Efficiency, and Performance for Low-Resource Machine Translation

06/01/2018  Scaling Neural Machine Translation

05/05/2018  Exploring Hyper-Parameter Optimization for Neural Machine Translation on GPU Architectures

10/22/2020  Not all parameters are born equal: Attention is mostly what you need

09/19/2020  Towards Computational Linguistics in Minangkabau Language: Studies on Sentiment Analysis and Machine Translation

09/10/2022  Simple and Effective Gradient-Based Tuning of Sequence-to-Sequence Models

08/09/2023  Vector quantization loss analysis in VQGANs: a single-GPU ablation study for image-to-image synthesis
