Deep Transformers with Latent Depth

09/28/2020
by Xian Li, et al.

The Transformer model has achieved state-of-the-art performance in many sequence modeling tasks. However, how to leverage model capacity with large or variable depth remains an open challenge. We present a probabilistic framework that automatically learns which layer(s) to use by learning posterior distributions over layer selection. As an extension of this framework, we propose a novel method to train one shared Transformer network for multilingual machine translation, with a different layer-selection posterior for each language pair. The proposed method alleviates the vanishing-gradient issue and enables stable training of deep Transformers (e.g., 100 layers). We evaluate on WMT English-German machine translation and masked language modeling tasks, where our method outperforms existing approaches to training deeper Transformers. Experiments on multilingual machine translation demonstrate that this approach can effectively leverage increased model capacity and bring universal improvements for both many-to-one and one-to-many translation with diverse language pairs.
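To make the layer-selection idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation. It assumes a Gumbel-sigmoid (binary concrete) relaxation of the per-layer selection posterior, and the names LatentDepthEncoder, select_logits, temperature, and num_lang_pairs are illustrative. Each layer's output is gated by a sampled selection variable and mixed with a residual pass-through, with one set of selection logits per language pair.

import torch
import torch.nn as nn


class LatentDepthEncoder(nn.Module):
    """Transformer encoder stack with learned per-layer selection (illustrative sketch)."""

    def __init__(self, d_model=512, nhead=8, num_layers=24, num_lang_pairs=1, temperature=1.0):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_layers)
        )
        # One selection logit per (language pair, layer): parameterizes q(z_l | language pair).
        self.select_logits = nn.Parameter(torch.zeros(num_lang_pairs, num_layers))
        self.temperature = temperature

    def layer_gates(self, lang_pair: int) -> torch.Tensor:
        logits = self.select_logits[lang_pair]
        if self.training:
            # Gumbel-sigmoid relaxation: a differentiable stand-in for sampling
            # z_l ~ Bernoulli(q_l) from the layer-selection posterior.
            u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log1p(-u)
            return torch.sigmoid((logits + noise) / self.temperature)
        # At inference, use the posterior mean instead of a sample.
        return torch.sigmoid(logits)

    def forward(self, x: torch.Tensor, lang_pair: int = 0) -> torch.Tensor:
        gates = self.layer_gates(lang_pair)
        for gate, layer in zip(gates, self.layers):
            # Soft layer selection: a gate near 0 reduces the layer to a skip
            # connection, which also keeps gradients flowing through deep stacks.
            x = gate * layer(x) + (1.0 - gate) * x
        return x


# Example usage (hypothetical shapes): layer selection conditioned on a language pair.
encoder = LatentDepthEncoder(num_layers=12, num_lang_pairs=10)
tokens = torch.randn(2, 16, 512)        # (batch, sequence, d_model)
out = encoder(tokens, lang_pair=3)

The residual pass-through for unselected layers is the design choice that, under this reading of the abstract, stabilizes training at large depth: layers the posterior turns off still let gradients flow.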

Related research

09/17/2021
Back-translation for Large-Scale Multilingual Machine Translation
This paper illustrates our approach to the shared task on large-scale mu...

11/23/2022
TorchScale: Transformers at Scale
Large Transformers have achieved state-of-the-art performance across man...

12/30/2020
Reservoir Transformer
We demonstrate that transformers obtain impressive performance even when...

02/21/2020
Accessing Higher-level Representations in Sequential Transformers with Feedback Memory
Transformers are feedforward networks that can process input tokens in p...

01/30/2023
Alternating Updates for Efficient Transformers
It is well established that increasing scale in deep transformer network...

03/01/2022
DeepNet: Scaling Transformers to 1,000 Layers
In this paper, we propose a simple yet effective method to stabilize ext...

03/11/2023
Stabilizing Transformer Training by Preventing Attention Entropy Collapse
Training stability is of great importance to Transformers. In this work,...
