Explicitly Modeling Adaptive Depths for Transformer

04/27/2020
by Yijin Liu, et al.

The vanilla Transformer performs a fixed number of computations for every word in a sentence, regardless of whether the word is easy or difficult to learn. For both computational efficiency and ease of learning, it is preferable to vary the number of computations dynamically with the hardness of the input words. However, how to find a suitable estimate of such hardness, and how to explicitly model adaptive computation depths from it, have not yet been investigated. In this paper, we address this issue and propose two effective approaches, 1) mutual-information-based estimation and 2) reconstruction-loss-based estimation, to measure the hardness of learning a word's representation and to determine its computation depth accordingly. Results on the classic text classification task (24 datasets of various sizes and domains) show that our approaches achieve superior performance with higher computational efficiency than the vanilla Transformer and previous depth-adaptive models. More importantly, our approaches lead to more robust depth-adaptive Transformer models with better interpretability of the depth distribution.
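The abstract only names the two hardness estimators, so to make the depth-adaptation idea concrete, below is a minimal PyTorch sketch of the reconstruction-loss variant. Everything here is an illustrative assumption rather than the paper's exact method: the shallow linear reconstructor, the linear hardness-to-depth mapping, and the per-word masking are placeholders for whatever the paper actually uses.

import torch
import torch.nn as nn

class DepthAdaptiveEncoder(nn.Module):
    """Sketch of reconstruction-loss-based adaptive depth.

    Assumption: a word whose first-layer representation cannot be
    mapped back to its input embedding is "hard" and gets more layers.
    """

    def __init__(self, d_model=512, n_heads=8, max_depth=6):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
             for _ in range(max_depth)]
        )
        # Shallow decoder used only to score hardness (assumed form).
        self.reconstruct = nn.Linear(d_model, d_model)
        self.max_depth = max_depth

    def forward(self, x):  # x: (batch, seq, d_model) word embeddings
        h = self.layers[0](x)
        # Per-word hardness = mean-squared reconstruction error.
        hardness = ((self.reconstruct(h) - x) ** 2).mean(dim=-1)  # (batch, seq)
        # Assumed mapping: normalize hardness to [0, 1], then assign
        # each word an integer depth in [1, max_depth].
        span = hardness.max() - hardness.min()
        norm = (hardness - hardness.min()) / (span + 1e-9)
        depths = 1 + (norm * (self.max_depth - 1)).round().long()
        # Words stop updating once their assigned depth is reached;
        # finished words carry their last representation forward.
        for d in range(1, self.max_depth):
            updated = self.layers[d](h)
            still_active = (depths > d).unsqueeze(-1)  # (batch, seq, 1)
            h = torch.where(still_active, updated, h)
        return h, depths

Note that the torch.where masking keeps the sketch simple but does not itself save computation, since every layer is still applied to the full batch; an efficient implementation would skip finished words. The mutual-information variant would keep the same depth assignment but replace the reconstruction error with an estimate of how informative each word is in its context.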

Related research

02/29/2020
Depth-Adaptive Graph Recurrent Network for Text Classification
The Sentence-State LSTM (S-LSTM) is a powerful and highly efficient graph ...

07/02/2021
Transformer-F: A Transformer network with effective methods for learning universal sentence representation
The Transformer model is widely used in natural language processing for ...

10/22/2019
Depth-Adaptive Transformer
State of the art sequence-to-sequence models perform a fixed number of c...

03/14/2023
I3D: Transformer architectures with input-dependent dynamic depth for speech recognition
Transformer-based end-to-end speech recognition has achieved great succe...

10/22/2020
N-ODE Transformer: A Depth-Adaptive Variant of the Transformer Using Neural Ordinary Differential Equations
We use neural ordinary differential equations to formulate a variant of ...

09/12/2021
Levenshtein Training for Word-level Quality Estimation
We propose a novel scheme to use the Levenshtein Transformer to perform ...
