Accelerating Neural Transformer via an Average Attention Network

05/02/2018
by Biao Zhang et al.

With its parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, decoding becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers: an average layer that models dependencies on previous positions, and a gating layer stacked over the average layer to enhance the expressiveness of the proposed attention network. We apply this network to the decoder of the neural Transformer, replacing the original target-side self-attention model. With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than the original version, with almost no loss in training speed or translation performance. We conduct a series of experiments on WMT17 translation tasks and obtain robust and consistent decoding speed-ups across 6 different language pairs.
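The idea lends itself to a compact sketch. Below is a minimal PyTorch illustration of an average attention layer; the module name, layer sizes, and FFN shape are illustrative assumptions rather than the authors' exact configuration, and the residual connection and layer normalization that wrap the layer are omitted. During training the cumulative average over previous positions can be computed for all positions in parallel (the paper describes this with a masked matrix multiplication; a cumulative sum is an equivalent shortcut), while at decoding time a running sum turns each step into an O(1) update instead of an O(j) pass over all previous positions.

```python
import torch
import torch.nn as nn


class AverageAttention(nn.Module):
    """Sketch of an average attention network (AAN) layer.

    An average layer replaces decoder self-attention with a cumulative
    mean over previous positions; a gating layer then mixes the averaged
    context with the current input.
    """

    def __init__(self, d_model: int):
        super().__init__()
        # Position-wise feed-forward applied to the cumulative average.
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, d_model),
        )
        # Jointly produces the input gate i_j and forget gate f_j.
        self.gates = nn.Linear(2 * d_model, 2 * d_model)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, length, d_model). Training-time path: the running
        # mean over positions 1..j for every j at once.
        length = y.size(1)
        positions = torch.arange(1, length + 1, device=y.device)
        g = y.cumsum(dim=1) / positions.view(1, -1, 1)  # cumulative mean
        g = self.ffn(g)
        i, f = self.gates(torch.cat([y, g], dim=-1)).chunk(2, dim=-1)
        return torch.sigmoid(i) * y + torch.sigmoid(f) * g

    @torch.no_grad()
    def step(self, y_j: torch.Tensor, sum_prev: torch.Tensor, j: int):
        # Decoding-time path: the dynamic-programming trick keeps only a
        # running sum, so each new position is a constant-time update.
        sum_j = sum_prev + y_j
        g = self.ffn(sum_j / j)
        i, f = self.gates(torch.cat([y_j, g], dim=-1)).chunk(2, dim=-1)
        h = torch.sigmoid(i) * y_j + torch.sigmoid(f) * g
        return h, sum_j
```

At decoding time, `step()` is called once per generated token with the running sum carried over from the previous step; because no growing key/value cache has to be attended over, this constant-time update is where the reported speed-up comes from.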


