
Learning Optimal Policy for Simultaneous Machine Translation via Binary Search

by   Shoutao Guo, et al.

Simultaneous machine translation (SiMT) begins outputting the translation while still reading the source sentence, and therefore needs a precise policy to decide when to emit each generated token. The policy determines how many source tokens are read before each target token is produced. However, learning a precise translation policy that achieves a good latency-quality trade-off is difficult, because parallel sentences come with no gold policy to serve as explicit supervision. In this paper, we present a new method that constructs the optimal policy online via binary search. With this explicit supervision, the SiMT model learns the optimal policy, which guides the model in completing the translation during inference. Experiments on four translation tasks show that our method exceeds strong baselines across all latency scenarios.
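To make the core idea concrete, here is a minimal illustrative sketch (not the paper's actual training procedure): if we assume a monotone "sufficient context" predicate — once enough source tokens have been read to translate a given target token correctly, reading more never hurts — then the smallest number of source tokens to read can be found with binary search rather than a linear scan. The predicate `is_sufficient` and the helper name are hypothetical placeholders.

```python
def min_source_tokens(is_sufficient, n):
    """Binary search for the smallest k in [1, n] such that reading k
    source tokens is enough to produce the current target token.

    Assumes monotonicity: if is_sufficient(k) is True, then
    is_sufficient(k') is True for all k' >= k. Runs in O(log n)
    checks instead of O(n) for a linear scan.
    """
    lo, hi = 1, n
    while lo < hi:
        mid = (lo + hi) // 2
        if is_sufficient(mid):
            hi = mid      # mid tokens suffice; try reading fewer
        else:
            lo = mid + 1  # mid tokens are not enough; read more
    return lo

# Toy example: pretend the target token becomes translatable
# once at least 4 source tokens have been read.
k = min_source_tokens(lambda k: k >= 4, n=10)
print(k)  # → 4
```

Applying this per target position yields a read/write schedule (how many source tokens precede each output token), which is the kind of explicit policy supervision the abstract refers to.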
