Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning

12/29/2020
by Xuebo Liu, et al.

Encoder layer fusion (EncoderFusion) is a technique that fuses all encoder layers (instead of only the uppermost one) in sequence-to-sequence (Seq2Seq) models, and it has proven effective on various NLP tasks. However, it remains unclear why and when EncoderFusion works. In this paper, our main contribution is to take a step further in understanding EncoderFusion. Many previous studies attribute its success to exploiting the surface and syntactic information embedded in the lower encoder layers. In contrast, we find that the encoder embedding layer is more important than the intermediate encoder layers, and that the uppermost decoder layer consistently pays more attention to the encoder embedding layer across NLP tasks. Based on this observation, we propose a simple fusion method, SurfaceFusion, which fuses only the encoder embedding layer into the softmax layer. Experimental results show that SurfaceFusion outperforms EncoderFusion on several NLP benchmarks, including machine translation, text summarization, and grammatical error correction, and it achieves state-of-the-art performance on the WMT16 Romanian-English and WMT14 English-French translation tasks. Extensive analyses reveal that SurfaceFusion learns more expressive bilingual word embeddings by building a closer relationship between relevant source and target embeddings. The source code will be released.
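
To make the idea concrete, below is a minimal PyTorch sketch of fusing the encoder embedding layer into the decoder's softmax layer, as the abstract describes. The abstract does not specify the fusion function, so the attention-based combination and the names SurfaceFusionHead and fusion_weight are illustrative assumptions, not the paper's implementation.

# Minimal sketch (assumptions): attend from the top decoder states to the
# encoder *embedding-layer* outputs, then mix the resulting "surface" context
# into the decoder state before the output projection (softmax layer).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurfaceFusionHead(nn.Module):
    def __init__(self, d_model: int, vocab_size: int, fusion_weight: float = 0.5):
        super().__init__()
        self.output_proj = nn.Linear(d_model, vocab_size)  # standard softmax layer
        self.fusion_weight = fusion_weight                  # hypothetical mixing coefficient

    def forward(self, decoder_states, encoder_embeddings, src_pad_mask=None):
        # decoder_states:     (batch, tgt_len, d_model), from the top decoder layer
        # encoder_embeddings: (batch, src_len, d_model), from the encoder embedding layer
        # Scaled dot-product attention over the raw source embeddings.
        scores = torch.matmul(decoder_states, encoder_embeddings.transpose(1, 2))
        scores = scores / decoder_states.size(-1) ** 0.5
        if src_pad_mask is not None:  # src_pad_mask: (batch, src_len), True at padding
            scores = scores.masked_fill(src_pad_mask.unsqueeze(1), float("-inf"))
        attn = F.softmax(scores, dim=-1)
        surface_context = torch.matmul(attn, encoder_embeddings)

        # Fuse the surface context with the decoder state before the softmax.
        fused = (1 - self.fusion_weight) * decoder_states + self.fusion_weight * surface_context
        return F.log_softmax(self.output_proj(fused), dim=-1)

# Example usage with toy shapes:
#   head = SurfaceFusionHead(d_model=512, vocab_size=32000)
#   logp = head(decoder_states, encoder_embeddings)  # (batch, tgt_len, vocab_size)

Note that, unlike EncoderFusion, this sketch touches only the embedding-layer representations and the softmax layer, leaving the intermediate encoder and decoder layers unchanged.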


Related research

05/16/2020 - Layer-Wise Cross-View Decoding for Sequence-to-Sequence Learning
In sequence-to-sequence learning, the attention mechanism has been a gre...

04/16/2022 - BLISS: Robust Sequence-to-Sequence Learning via Self-Supervised Input Representation
Data augmentations (DA) are the cores to achieving robust sequence-to-se...

11/16/2019 - Understanding and Improving Layer Normalization
Layer normalization (LayerNorm) is a technique to normalize the distribu...

07/20/2023 - Layer-wise Representation Fusion for Compositional Generalization
Despite successes across a broad range of applications, sequence-to-sequ...

03/21/2020 - Analyzing Word Translation of Transformer Layers
The Transformer translation model is popular for its effective paralleli...

05/20/2023 - Learn to Compose Syntactic and Semantic Representations Appropriately for Compositional Generalization
Recent studies have shown that sequence-to-sequence (Seq2Seq) models are...

01/31/2020 - Pseudo-Bidirectional Decoding for Local Sequence Transduction
Local sequence transduction (LST) tasks are sequence transduction tasks ...
