Identifying and Controlling Important Neurons in Neural Machine Translation

11/03/2018
by Anthony Bau et al.

Neural machine translation (NMT) models learn representations containing substantial linguistic information. However, it is not clear whether such information is fully distributed or whether some of it can be attributed to individual neurons. We develop unsupervised methods for discovering important neurons in NMT models. Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision. We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena. Finally, we show how to control NMT translations in predictable ways by modifying the activations of individual neurons.
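
The abstract leaves the discovery method at the level of an intuition: independently trained models learn similar properties, so a neuron is important if some neuron in another model tracks it closely. The sketch below is one plausible instantiation of that idea in NumPy, scoring each neuron of one model by its best absolute Pearson correlation with any neuron of a second model over the same corpus. The array names and the stabilizing epsilon are illustrative assumptions, not the paper's code.

```python
import numpy as np

def max_corr_scores(acts_a, acts_b):
    """Score each neuron of model A by its highest absolute Pearson
    correlation with any neuron of model B on the same tokens.

    acts_a: (num_tokens, neurons_a) activations from model A
    acts_b: (num_tokens, neurons_b) activations from model B
    returns: (neurons_a,) scores in [0, 1]; high = property shared
             across models, low = likely model-specific behavior
    """
    # Column-wise z-scores: the mean of a product of z-scores is Pearson r.
    za = (acts_a - acts_a.mean(axis=0)) / (acts_a.std(axis=0) + 1e-8)
    zb = (acts_b - acts_b.mean(axis=0)) / (acts_b.std(axis=0) + 1e-8)
    corr = (za.T @ zb) / acts_a.shape[0]  # (neurons_a, neurons_b) r-matrix
    return np.abs(corr).max(axis=1)

# Rank model A's neurons; the top of the list should hold neurons whose
# behavior is mirrored in the independently trained model B.
# ranking = np.argsort(-max_corr_scores(acts_a, acts_b))
```

The quality experiment the abstract alludes to can then be run by ablating neurons in ranked order and measuring the drop in translation quality. For the control result, the abstract only says that translations are steered by modifying activations of individual neurons. One common way to realize such an intervention in PyTorch is a forward hook that pins a chosen hidden unit to a fixed value during decoding; the layer path and constants below are hypothetical.

```python
import torch

def pin_neuron(neuron_idx, value):
    """Forward hook that overwrites one hidden unit with a constant,
    assuming the hooked module returns a plain (..., hidden) tensor."""
    def hook(module, inputs, output):
        output = output.clone()            # don't mutate the original tensor
        output[..., neuron_idx] = value    # fix this neuron's activation
        return output                      # returned value replaces the output
    return hook

# Hypothetical usage on the last encoder layer of an NMT model:
# handle = model.encoder.layers[-1].register_forward_hook(pin_neuron(250, 2.0))
# ... translate and inspect how the output changes ...
# handle.remove()
```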

Related research

What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models (12/21/2018)
Despite the remarkable evolution of deep neural networks in natural lang...

Neuron Interaction Based Representation Composition for Neural Machine Translation (11/22/2019)
Recent NLP studies reveal that substantial linguistic information can be...

Towards Supervised and Unsupervised Neural Machine Translation Baselines for Nigerian Pidgin (03/27/2020)
Nigerian Pidgin is arguably the most widely spoken language in Nigeria. ...

Controlling the Output Length of Neural Machine Translation (10/23/2019)
The recent advances introduced by neural machine translation (NMT) are r...

On Neurons Invariant to Sentence Structural Changes in Neural Machine Translation (10/06/2021)
To gain insight into the role neurons play, we study the activation patt...

Context in Neural Machine Translation: A Review of Models and Evaluations (01/25/2019)
This review paper discusses how context has been used in neural machine ...

Merging External Bilingual Pairs into Neural Machine Translation (12/02/2019)
As neural machine translation (NMT) is not easily amenable to explicit c...