Non-Autoregressive Translation by Learning Target Categorical Codes

03/21/2021
by Yu Bao, et al.

The non-autoregressive Transformer is a promising text generation model. However, current non-autoregressive models still fall behind their autoregressive counterparts in translation quality. We attribute this accuracy gap to the lack of dependency modeling among decoder inputs. In this paper, we propose CNAT, which implicitly learns categorical codes as latent variables and incorporates them into non-autoregressive decoding. The interaction among these categorical codes remedies the missing dependencies and improves the model's capacity. Experimental results show that our model achieves comparable or better performance on machine translation tasks than several strong baselines.
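To make the idea concrete, below is a minimal PyTorch sketch (not the authors' implementation) of a non-autoregressive decoder that quantizes target representations into a small categorical codebook and adds the resulting code embeddings to the decoder inputs. All class names and hyperparameters (num_codes, d_model, etc.) are illustrative placeholders, and the simple position-wise code predictor stands in for the paper's richer model of dependencies among codes.

```python
# Illustrative sketch only: quantize target hidden states into discrete
# categorical codes and feed their embeddings to a NAT decoder, so every
# decoder position carries some target-side information.
import torch
import torch.nn as nn

class CodeQuantizer(nn.Module):
    """Nearest-neighbour vector quantization over a learned codebook."""
    def __init__(self, num_codes=64, d_model=512):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, d_model)

    def forward(self, h):                       # h: (batch, tgt_len, d_model)
        # Squared L2 distance from each position to every code vector.
        dist = (h.pow(2).sum(-1, keepdim=True)
                - 2 * h @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(-1))
        codes = dist.argmin(-1)                 # (batch, tgt_len) discrete codes
        quantized = self.codebook(codes)
        # Straight-through estimator so gradients flow back into h.
        quantized = h + (quantized - h).detach()
        return codes, quantized

class LatentCodeNATDecoder(nn.Module):
    """NAT decoder whose inputs are augmented with categorical-code embeddings.

    During training the codes come from quantizing target hidden states;
    at inference they come from a learned predictor (a plain linear layer
    here, standing in for a stronger code-dependency model).
    """
    def __init__(self, vocab_size, num_codes=64, d_model=512, nhead=8, nlayers=6):
        super().__init__()
        self.quantizer = CodeQuantizer(num_codes, d_model)
        self.code_predictor = nn.Linear(d_model, num_codes)
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, nlayers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, dec_inputs, enc_out, tgt_hidden=None):
        if tgt_hidden is not None:              # training: quantize the target
            codes, code_emb = self.quantizer(tgt_hidden)
        else:                                   # inference: predict the codes
            codes = self.code_predictor(dec_inputs).argmax(-1)
            code_emb = self.quantizer.codebook(codes)
        # No causal mask: all positions are decoded in parallel, and the code
        # embeddings supply the dependencies vanilla NAT inputs lack.
        h = self.decoder(dec_inputs + code_emb, enc_out)
        return self.out(h), codes
```

At training time the straight-through estimator lets gradients pass through the discrete code assignment; at inference the decoder never sees the target, so the codes must come from the predictor alone, which is why modeling the interaction among codes matters.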

Related research:

- Glancing Transformer for Non-Autoregressive Neural Machine Translation (08/18/2020): Non-autoregressive neural machine translation achieves remarkable infere...
- Non-autoregressive Transformer by Position Learning (11/25/2019): Non-autoregressive models are promising on various text generation tasks...
- latent-GLAT: Glancing at Latent Variables for Parallel Text Generation (04/05/2022): Recently, parallel text generation has received widespread attention due...
- An Adversarial Non-Autoregressive Model for Text Generation with Incomplete Information (05/06/2023): Non-autoregressive models have been widely studied in the Complete Infor...
- Context-Aware Cross-Attention for Non-Autoregressive Translation (11/02/2020): Non-autoregressive translation (NAT) significantly accelerates the infer...
- Cascaded Text Generation with Markov Transformers (06/01/2020): The two dominant approaches to neural text generation are fully autoregr...
- CUNI Non-Autoregressive System for the WMT 22 Efficient Translation Shared Task (12/01/2022): We present a non-autoregressive system submission to the WMT 22 Efficien...
