Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages

08/11/2022
by   Paul Soulos, et al.

Machine translation has seen rapid progress with the advent of Transformer-based models. These models have no explicit linguistic structure built into them, yet they may still implicitly learn structured relationships by attending to relevant tokens. We hypothesize that this structural learning could be made more robust by explicitly endowing Transformers with a structural bias, and we investigate two methods for building in such a bias. The first, the TP-Transformer, augments the standard Transformer architecture with an additional component that represents structure. The second imbues structure at the data level by segmenting the data with morphological tokenization. We test these methods on translation from English into two morphologically rich languages, Turkish and Inuktitut, and consider both automatic metrics and human evaluations. We find that each of the two approaches allows the network to achieve better performance, but this improvement depends on the size of the dataset. In sum, structural encoding methods make Transformers more sample-efficient, enabling them to perform better from smaller amounts of data.
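
As a rough sketch of the first method, the code below extends a single attention head with a learned role projection and binds the ordinary attention output (the "filler") to its role vector with an elementwise product, in the spirit of Tensor Product Representations. This is our assumption about the general TP-Transformer recipe, not the authors' implementation; the class name TPAttentionHead and the dimensions are invented for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TPAttentionHead(nn.Module):
    """One attention head with TPR-style role binding (illustrative sketch;
    the paper's actual TP-Transformer layer may differ in detail)."""

    def __init__(self, d_model: int, d_head: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_head)
        self.k = nn.Linear(d_model, d_head)
        self.v = nn.Linear(d_model, d_head)
        self.r = nn.Linear(d_model, d_head)  # extra projection: role vectors
        self.scale = d_head ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v, r = self.q(x), self.k(x), self.v(x), self.r(x)
        attn = F.softmax((q @ k.transpose(-2, -1)) * self.scale, dim=-1)
        filler = attn @ v   # standard scaled dot-product attention output
        return filler * r   # bind each filler to its role (Hadamard product)

x = torch.randn(2, 5, 64)          # (batch, seq_len, d_model)
out = TPAttentionHead(64, 16)(x)
print(out.shape)                   # torch.Size([2, 5, 16])
```

The extra role projection is the "additional component to represent structure": each token's content and its structural role get separate representations that are bound together before the head outputs are combined.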
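A toy sketch of what morphological tokenization does to the input follows. The suffix list and greedy stripping are invented for illustration only; a real pipeline would learn segmentations from data with a trained morphological segmenter rather than a hand-written lexicon. The point is simply that one inflected Turkish word decomposes into reusable morphemes the model can share across word forms.

```python
# Hypothetical toy segmenter (not the paper's pipeline): greedily strip
# suffixes from a small hand-written Turkish suffix list.
TURKISH_SUFFIXES = ["imiz", "ler", "lar", "den", "dan", "im", "de", "da"]

def segment(word: str) -> list[str]:
    """Greedily strip known suffixes from the right edge of the word."""
    morphemes = []
    stripped = True
    while stripped:
        stripped = False
        for suffix in sorted(TURKISH_SUFFIXES, key=len, reverse=True):
            # Keep at least a two-letter stem so we never consume the root.
            if word.endswith(suffix) and len(word) >= len(suffix) + 2:
                morphemes.insert(0, suffix)
                word = word[: -len(suffix)]
                stripped = True
                break
    morphemes.insert(0, word)
    return morphemes

# "evlerimizden" = ev + ler + imiz + den ("from our houses")
print(segment("evlerimizden"))  # ['ev', 'ler', 'imiz', 'den']
```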

Related research

06/04/2021 · Scalable Transformers for Neural Machine Translation
Transformer has been widely adopted in Neural Machine Translation (NMT) ...

05/29/2023 · Approximation theory of transformer networks for sequence modeling
The transformer is a widely applied architecture in sequence modeling ap...

08/03/2020 · DeLighT: Very Deep and Light-weight Transformer
We introduce a very deep and light-weight transformer, DeLighT, that del...

05/31/2023 · FEED PETs: Further Experimentation and Expansion on the Disambiguation of Potentially Euphemistic Terms
Transformers have been shown to work well for the task of English euphem...

01/06/2016 · Incorporating Structural Alignment Biases into an Attentional Neural Translation Model
Neural encoder-decoder models of machine translation have achieved impre...

04/12/2022 · What do Toothbrushes do in the Kitchen? How Transformers Think our World is Structured
Transformer-based models are now predominant in NLP. They outperform app...

05/03/2023 · Approximating CKY with Transformers
We investigate the ability of transformer models to approximate the CKY ...
