Relaxed Attention for Transformer Models

09/20/2022
by Timo Lohrenz, et al.

The powerful modeling capabilities of all-attention-based transformer architectures often cause overfitting and - for natural language processing tasks - lead to an implicitly learned internal language model in the autoregressive transformer decoder, complicating the integration of external language models. In this paper, we explore relaxed attention, a simple and easy-to-implement smoothing of the attention weights, yielding a two-fold improvement to the general transformer architecture: First, relaxed attention provides regularization when applied to the self-attention layers in the encoder. Second, we show that it naturally supports the integration of an external language model, as it suppresses the implicitly learned internal language model by relaxing the cross attention in the decoder. We demonstrate the benefit of relaxed attention across several tasks with clear improvement in combination with recent benchmark approaches. Specifically, we exceed the former state-of-the-art performance of 26.90% word error rate on the public lip-reading LRS3 benchmark with a word error rate of 26.31%, and we achieve a top-performing BLEU score of 37.67 on the IWSLT14 (DE→EN) machine translation task without external language models and with virtually no additional model parameters. Code and models will be made publicly available.
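To make the "smoothing of the attention weights" described above concrete, here is a minimal PyTorch sketch. It assumes relaxed attention amounts to blending the softmax attention weights with a uniform distribution over the key positions via a relaxation coefficient gamma (gamma = 0 recovers standard scaled dot-product attention); the function name, tensor shapes, and default gamma are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def relaxed_attention(query, key, value, gamma=0.1):
    """Scaled dot-product attention with smoothed ("relaxed") attention weights.

    query, key, value: tensors of shape (batch, heads, seq_len, d_k).
    gamma: relaxation coefficient in [0, 1]; gamma = 0 gives standard attention.
    """
    d_k = query.size(-1)
    # Standard scaled dot-product attention scores and softmax weights.
    scores = torch.matmul(query, key.transpose(-2, -1)) / (d_k ** 0.5)
    weights = F.softmax(scores, dim=-1)
    # Smooth the weights toward a uniform distribution over the key positions.
    uniform = torch.full_like(weights, 1.0 / weights.size(-1))
    relaxed_weights = (1.0 - gamma) * weights + gamma * uniform
    return torch.matmul(relaxed_weights, value)

# Toy usage: batch of 2, 4 heads, sequence length 8, head dimension 16.
q = k = v = torch.randn(2, 4, 8, 16)
out = relaxed_attention(q, k, v, gamma=0.1)
print(out.shape)  # torch.Size([2, 4, 8, 16])
```

In this reading, applying the smoothing in the encoder's self-attention layers acts as a regularizer, while applying it to the decoder's cross attention dampens the implicitly learned internal language model, which is what eases the integration of an external language model.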
