Layer-Wise Multi-View Learning for Neural Machine Translation

11/03/2020
by Qiang Wang et al.

Traditional neural machine translation is limited to the topmost encoder layer's context representation and cannot directly perceive the lower encoder layers. Existing solutions usually rely on adjusting the network architecture, which complicates computation or introduces additional structural constraints. In this work, we propose layer-wise multi-view learning to solve this problem without changing the model structure. We regard each encoder layer's off-the-shelf output, a by-product of layer-by-layer encoding, as a redundant view of the input sentence. Thus, in addition to the topmost encoder layer (the primary view), we also incorporate an intermediate encoder layer as an auxiliary view. The two views are fed to a partially shared decoder so that each maintains an independent prediction, and a consistency regularization based on KL divergence encourages the two views to learn from each other. Extensive experimental results on five translation tasks show that our approach yields stable improvements over multiple strong baselines. As an added bonus, our method is agnostic to network architecture and maintains the same inference speed as the original model.
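The training objective described in the abstract combines per-view cross-entropy with a KL-based consistency term. Below is a minimal PyTorch sketch of such an objective, not the paper's actual code: the encoder/decoder interfaces (e.g. `return_all_layers`, `memory`), the layer index `aux_layer_id`, and the weight `kl_weight` are illustrative assumptions.

```python
# Minimal sketch of a layer-wise multi-view training loss.
# Assumes a Transformer-style model; `encoder`, `decoder`,
# `aux_layer_id`, and `kl_weight` are hypothetical names.
import torch.nn.functional as F

def multi_view_loss(encoder, decoder, src, tgt_in, tgt_out,
                    aux_layer_id=3, kl_weight=1.0, pad_id=0):
    # Encode once, keeping every layer's output as a candidate "view".
    layer_outputs = encoder(src, return_all_layers=True)  # list of [B, S, H]
    primary = layer_outputs[-1]              # topmost layer: primary view
    auxiliary = layer_outputs[aux_layer_id]  # intermediate layer: auxiliary view

    # Decode from each view independently (decoder partially shared).
    logits_p = decoder(tgt_in, memory=primary)    # [B, T, V]
    logits_a = decoder(tgt_in, memory=auxiliary)  # [B, T, V]

    # Cross-entropy on both views keeps the predictions independent.
    ce_p = F.cross_entropy(logits_p.transpose(1, 2), tgt_out, ignore_index=pad_id)
    ce_a = F.cross_entropy(logits_a.transpose(1, 2), tgt_out, ignore_index=pad_id)

    # Symmetric KL consistency term encourages the views to agree.
    log_p = F.log_softmax(logits_p, dim=-1)
    log_a = F.log_softmax(logits_a, dim=-1)
    kl = 0.5 * (F.kl_div(log_a, log_p, log_target=True, reduction="batchmean")
                + F.kl_div(log_p, log_a, log_target=True, reduction="batchmean"))

    return ce_p + ce_a + kl_weight * kl
```

Because the auxiliary view and the KL term are only needed during training, inference can decode from the primary view alone, which is consistent with the unchanged inference speed noted above.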

Related research

10/24/2018
Exploiting Deep Representations for Neural Machine Translation
Advanced neural machine translation (NMT) models generally implement enc...

05/16/2020
Layer-Wise Cross-View Decoding for Sequence-to-Sequence Learning
In sequence-to-sequence learning, the attention mechanism has been a gre...

06/04/2019
Exploiting Sentential Context for Neural Machine Translation
In this work, we present novel approaches to exploit sentential context ...

08/27/2019
Multi-Layer Softmaxing during Training Neural Machine Translation for Flexible Decoding with Fewer Layers
This paper proposes a novel procedure for training an encoder-decoder ba...

02/16/2020
Multi-layer Representation Fusion for Neural Machine Translation
Neural machine translation systems require a number of stacked layers fo...

08/19/2021
MvSR-NAT: Multi-view Subset Regularization for Non-Autoregressive Machine Translation
Conditional masked language models (CMLM) have shown impressive progress...
