Prior Knowledge Integration for Neural Machine Translation using Posterior Regularization

11/02/2018
by Jiacheng Zhang, et al.

Although neural machine translation has made significant progress recently, how to integrate multiple overlapping, arbitrary prior knowledge sources remains a challenge. In this work, we propose to use posterior regularization to provide a general framework for integrating prior knowledge into neural machine translation. We represent prior knowledge sources as features in a log-linear model, which guides the learning process of the neural translation model. Experiments on Chinese-English translation show that our approach leads to significant improvements.
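To make the idea concrete, here is a minimal, hypothetical PyTorch sketch, not the paper's exact formulation: prior-knowledge features phi(x, y) define a log-linear "desired" distribution Q over a set of K candidate translations, and a KL term pulls the NMT model's posterior toward Q during training. All names, tensor shapes, and the kl_weight hyperparameter are illustrative assumptions.

import torch
import torch.nn.functional as F

def posterior_regularization_loss(model_logprobs, feature_scores,
                                  feature_weights, mle_loss, kl_weight=0.5):
    """Hypothetical posterior-regularized training loss.

    model_logprobs:  (B, K) log P(y_k | x) under the NMT model for K candidates
    feature_scores:  (B, K, F) prior-knowledge feature values phi(x, y_k)
    feature_weights: (F,) log-linear weights lambda
    mle_loss:        usual cross-entropy loss on the reference translations
    """
    # Desired distribution Q(y | x) proportional to exp(phi(x, y) . lambda):
    # a log-linear model over the candidate set, encoding the prior knowledge.
    q = F.softmax(feature_scores @ feature_weights, dim=-1)   # (B, K)
    q = q.detach()  # E-step: treat Q as a fixed target when updating the model

    # Renormalize the model's scores over the same candidate set.
    log_p = F.log_softmax(model_logprobs, dim=-1)             # (B, K)

    # KL(Q || P): penalize the model posterior for straying from Q.
    kl = (q * (torch.log(q + 1e-9) - log_p)).sum(dim=-1).mean()

    # Joint objective: data likelihood plus the posterior regularizer.
    return mle_loss + kl_weight * kl

# Toy call with random tensors: 2 sentences, 5 candidates, 3 features.
B, K, n_feat = 2, 5, 3
loss = posterior_regularization_loss(torch.randn(B, K),
                                     torch.randn(B, K, n_feat),
                                     torch.randn(n_feat),
                                     mle_loss=torch.tensor(1.0))

In the actual framework the feature weights and the trade-off between the likelihood and KL terms would themselves be tuned or learned; this sketch fixes them only for clarity.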


Related research

07/30/2016 · Supervised Attentions for Neural Machine Translation
In this paper, we improve the attention or alignment accuracy of neural ...

12/24/2018 · Moment Matching Training for Neural Machine Translation: A Preliminary Study
In previous works, neural sequence models have been shown to improve sig...

02/12/2021 · Continuous Learning in Neural Machine Translation using Bilingual Dictionaries
While recent advances in deep learning led to significant improvements i...

09/27/2011 · Generative Prior Knowledge for Discriminative Classification
We present a novel framework for integrating prior knowledge into discri...

04/30/2020 · Language Model Prior for Low-Resource Neural Machine Translation
The scarcity of large parallel corpora is an important obstacle for neur...

05/10/2021 · Self-Guided Curriculum Learning for Neural Machine Translation
In the field of machine learning, the well-trained model is assumed to b...

06/10/2022 · A Novel Chinese Dialect TTS Frontend with Non-Autoregressive Neural Machine Translation
Chinese dialect text-to-speech (TTS) system usually can only be utilized ...
