
Explicitly Modeling Adaptive Depths for Transformer

by Yijin Liu, et al.

The vanilla Transformer performs a fixed number of computations over every word in a sentence, regardless of whether the word is easy or difficult to learn. For both computational efficiency and ease of learning, it is preferable to vary the number of computations dynamically according to the hardness of the input words. However, how to find a suitable estimate of such hardness, and how to explicitly model adaptive computation depths, remain uninvestigated. In this paper, we address this issue and propose two effective approaches, namely 1) mutual-information-based estimation and 2) reconstruction-loss-based estimation, to measure how hard a word's representation is to learn and to determine its computational depth accordingly. Results on the classic text classification task (24 datasets of various sizes and domains) show that our approaches achieve superior performance while remaining more computationally efficient than the vanilla Transformer and previous depth-adaptive models. More importantly, our approaches lead to more robust depth-adaptive Transformer models with better interpretability of the depth distribution.
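The core idea (estimate per-word hardness, then let each word pass through only as many layers as its hardness warrants) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the reconstruction-loss hardness signal here uses a hypothetical stand-in `reconstructor`, the depth-assignment rule is a simple linear mapping we chose for the sketch, and the "layers" are placeholder functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hardness_scores(embeddings, reconstructor):
    # Reconstruction-loss-based hardness estimate (hypothetical reconstructor):
    # tokens whose embeddings are poorly reconstructed count as "harder".
    recon = reconstructor(embeddings)
    return np.mean((embeddings - recon) ** 2, axis=-1)

def assign_depths(scores, min_depth=1, max_depth=6):
    # Map normalized hardness linearly to an integer layer count per token
    # (one simple choice of mapping; the paper's rule may differ).
    norm = (scores - scores.min()) / (np.ptp(scores) + 1e-9)
    return (min_depth + np.round(norm * (max_depth - min_depth))).astype(int)

def adaptive_encode(embeddings, layers, depths):
    # Each token is refined only by its assigned number of layers;
    # tokens that have reached their depth are simply carried forward.
    out = embeddings.copy()
    for i, layer in enumerate(layers):
        active = depths > i
        out[active] = layer(out[active])
    return out

# Toy demo: 5 tokens of dimension 8, placeholder layers and reconstructor.
x = rng.normal(size=(5, 8))
layers = [lambda h: np.tanh(h) for _ in range(6)]
reconstructor = lambda h: 0.5 * h  # stand-in for a learned autoencoder
depths = assign_depths(hardness_scores(x, reconstructor))
y = adaptive_encode(x, layers, depths)
```

With this mapping, the hardest token receives `max_depth` layers and the easiest receives `min_depth`, so easy words exit early and save computation.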


Depth-Adaptive Graph Recurrent Network for Text Classification

The Sentence-State LSTM (S-LSTM) is a powerful and high efficient graph ...

Transformer-F: A Transformer network with effective methods for learning universal sentence representation

The Transformer model is widely used in natural language processing for ...

Depth-Adaptive Transformer

State of the art sequence-to-sequence models perform a fixed number of c...

I3D: Transformer architectures with input-dependent dynamic depth for speech recognition

Transformer-based end-to-end speech recognition has achieved great succe...

Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context

Transformer networks have a potential of learning longer-term dependency...

N-ODE Transformer: A Depth-Adaptive Variant of the Transformer Using Neural Ordinary Differential Equations

We use neural ordinary differential equations to formulate a variant of ...

Levenshtein Training for Word-level Quality Estimation

We propose a novel scheme to use the Levenshtein Transformer to perform ...