Decoder-Only or Encoder-Decoder? Interpreting Language Model as a Regularized Encoder-Decoder

04/08/2023
by Zihao Fu, et al.

The sequence-to-sequence (seq2seq) task aims to generate a target sequence from a given source sequence. Traditionally, most seq2seq tasks have been addressed with the encoder-decoder framework, which uses an encoder to encode the source sequence and a decoder to generate the target text. Recently, a number of approaches have emerged that apply decoder-only language models directly to the seq2seq task. Despite the significant advances in applying language models to seq2seq tasks, there is still a lack of thorough analysis of the effectiveness of the decoder-only language model architecture. This paper addresses this gap by conducting a detailed comparison between the encoder-decoder architecture and the decoder-only language model framework through the analysis of a regularized encoder-decoder structure. This structure is designed to replicate all behaviors of the classical decoder-only language model, but it has an explicit encoder and decoder, making it easier to compare with the classical encoder-decoder structure. Based on this analysis, we unveil the attention degeneration problem in the language model: as the number of generation steps grows, less and less attention is focused on the source sequence. To give a quantitative understanding of this problem, we conduct a theoretical sensitivity analysis of the attention output with respect to the source input. Grounded in this analysis, we propose a novel partial attention language model to solve the attention degeneration problem. Experimental results on machine translation, summarization, and data-to-text generation tasks support our analysis and demonstrate the effectiveness of the proposed model.
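The attention degeneration claim admits a quick back-of-the-envelope check. The sketch below is not the paper's code; it is a toy model assuming attention logits of roughly comparable scale across all positions. When a decoder-only model attends over the concatenated source and target through a single softmax, the total attention mass landing on a fixed-length source shrinks as the generated prefix grows:

```python
# Toy illustration (not from the paper): in a decoder-only LM, source and
# target positions compete in one attention softmax. With logits of
# comparable scale, the mass on a fixed-length source decays as the
# number of generated target tokens grows.
import numpy as np

rng = np.random.default_rng(0)
src_len = 16  # fixed source length, chosen arbitrarily for the demo

for tgt_len in [1, 8, 32, 128]:
    # 1000 sampled attention rows over src_len + tgt_len positions.
    logits = rng.normal(size=(1000, src_len + tgt_len))
    attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
    src_mass = attn[:, :src_len].sum(axis=1).mean()
    print(f"target length {tgt_len:4d}: mean attention mass on source = {src_mass:.3f}")
```

Under these assumptions the source mass falls roughly like src_len / (src_len + tgt_len). An encoder-decoder avoids this competition because cross-attention applies its own softmax over source positions only, which is one way to read the motivation for the partial attention design proposed in the paper.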
