Adding Interpretable Attention to Neural Translation Models Improves Word Alignment

01/31/2019
by Thomas Zenkel, et al.

Multi-layer models with multiple attention heads per layer provide superior translation quality compared to simpler, shallower models, but as a result it is more difficult to determine which source context is most relevant to each target word. Deriving high-accuracy word alignments from the activations of a state-of-the-art neural machine translation model therefore remains an open challenge. We propose a simple extension to the Transformer architecture that makes use of its hidden representations and is restricted to attend only to encoder information when predicting the next word. It can be trained on bilingual data without word-alignment annotations. We further introduce a novel alignment inference procedure that applies stochastic gradient descent to directly optimize the attention activations towards a given target word. The resulting alignments dramatically outperform the naive approach of interpreting Transformer attention activations directly, and are comparable to GIZA++ on two publicly available data sets.
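To make the proposed extension concrete, below is a minimal PyTorch sketch of a single-head attention layer of the kind the abstract describes: it sits on top of the trained Transformer, may attend only to encoder states, and predicts the next target word from the attended context, so its attention weights can be read off as a soft alignment. All names (AlignmentLayer, d_model, etc.) are our own illustrations, not the authors' code, and the details of how it hooks into the decoder are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentLayer(nn.Module):
    """Single attention head restricted to encoder states (illustrative sketch)."""

    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)
        self.value = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, vocab_size)
        self.scale = d_model ** -0.5

    def forward(self, dec_states, enc_states, src_pad_mask=None):
        # dec_states: (batch, tgt_len, d_model) hidden states from the decoder
        # enc_states: (batch, src_len, d_model) hidden states from the encoder
        q = self.query(dec_states)
        k = self.key(enc_states)
        v = self.value(enc_states)
        scores = torch.matmul(q, k.transpose(-2, -1)) * self.scale
        if src_pad_mask is not None:  # (batch, src_len), True at padding
            scores = scores.masked_fill(src_pad_mask.unsqueeze(1), float("-inf"))
        attn = F.softmax(scores, dim=-1)   # (batch, tgt_len, src_len)
        context = torch.matmul(attn, v)    # context built from encoder info only
        logits = self.out(context)         # next-word prediction per target position
        return logits, attn                # attn doubles as a soft word alignment
```

Because the layer predicts each target word from encoder information alone, training it with the usual cross-entropy translation loss requires no word-alignment supervision, matching the abstract's claim that only bilingual data is needed.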
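The gradient-based inference procedure can be sketched in the same spirit: rather than reading off the trained attention weights, treat the attention logits for one target position as free parameters and run SGD so that the layer above assigns high probability to the observed target word. This is a hedged reconstruction from the abstract, reusing the hypothetical AlignmentLayer from the previous block; step counts, learning rate, and function names are assumptions.

```python
import torch
import torch.nn.functional as F

def infer_alignment(layer, enc_states, target_id, steps=50, lr=0.1):
    # layer: a trained AlignmentLayer (see sketch above); its weights stay frozen
    # enc_states: (1, src_len, d_model); target_id: the observed target token id
    src_len = enc_states.size(1)
    logits = torch.zeros(1, src_len, requires_grad=True)  # free attention logits
    opt = torch.optim.SGD([logits], lr=lr)
    v = layer.value(enc_states)  # reuse the trained value projection
    for _ in range(steps):
        opt.zero_grad()
        attn = F.softmax(logits, dim=-1)                          # candidate alignment
        context = torch.matmul(attn.unsqueeze(1), v).squeeze(1)   # (1, d_model)
        word_logits = layer.out(context)
        # push the attention towards explaining the given target word
        loss = F.cross_entropy(word_logits, torch.tensor([target_id]))
        loss.backward()
        opt.step()
    return F.softmax(logits, dim=-1).detach()  # optimized soft alignment over source
```

Thresholding or arg-maxing the returned distribution per target word would yield the hard alignments that the abstract compares against GIZA++.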


Related research:

- 09/04/2019 · Jointly Learning to Align and Translate with Transformer Models: "The state of the art in machine translation (MT) is governed by neural a..."
- 04/30/2020 · End-to-End Neural Word Alignment Outperforms GIZA++: "Word alignment was once a core unsupervised learning task in natural lan..."
- 10/09/2017 · What does Attention in Neural Machine Translation Pay Attention to?: "Attention in neural machine translation provides the possibility to enco..."
- 09/11/2018 · On The Alignment Problem In Multi-Head Attention-Based Neural Machine Translation: "This work investigates the alignment problem in state-of-the-art multi-h..."
- 12/13/2020 · Mask-Align: Self-Supervised Neural Word Alignment: "Neural word alignment methods have received increasing attention recentl..."
- 03/17/2022 · Gaussian Multi-head Attention for Simultaneous Machine Translation: "Simultaneous machine translation (SiMT) outputs translation while receiv..."
- 06/25/2019 · Saliency-driven Word Alignment Interpretation for Neural Machine Translation: "Despite their original goal to jointly learn to align and translate, Neu..."
