Korean-English Machine Translation with Multiple Tokenization Strategy

05/29/2021
by Dojun Park, et al.

This work investigates how the choice of tokenization method affects the training of machine translation models. Alphabet tokenization, morpheme tokenization, and BPE tokenization were applied to Korean as the source language and to English as the target language, and the resulting nine models were each trained for 50,000 epochs with the Transformer neural network and compared. Measured by BLEU score, the model that applied BPE tokenization to Korean and morpheme tokenization to English performed best, scoring 35.73.
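The abstract does not give implementation details, but the three tokenization strategies can be illustrated with commonly used tools. The sketch below is a minimal example for the Korean source side only; the tool choices (SentencePiece for BPE, KoNLPy's Okt analyzer for morphemes, a plain character split standing in for alphabet-level tokenization), the hypothetical corpus file corpus.ko.txt, and the vocabulary size are assumptions, not the authors' actual setup.

```python
# Sketch of the three tokenization strategies compared in the paper,
# shown on the Korean source side. Tool choices and hyperparameters are
# assumptions for illustration, not the paper's configuration.
import sentencepiece as spm
from konlpy.tag import Okt  # Korean morphological analyzer; requires Java

sentence = "나는 어제 도서관에서 책을 읽었다"

# 1) Alphabet (character-level) tokenization: split into individual symbols.
#    A full jamo-level scheme would further decompose each Hangul syllable.
alphabet_tokens = [ch for ch in sentence if not ch.isspace()]

# 2) Morpheme tokenization with the Okt analyzer.
okt = Okt()
morpheme_tokens = okt.morphs(sentence)

# 3) BPE tokenization with SentencePiece, trained on a monolingual corpus.
spm.SentencePieceTrainer.train(
    input="corpus.ko.txt",   # hypothetical training corpus
    model_prefix="ko_bpe",
    vocab_size=8000,         # assumed value; the paper's size may differ
    model_type="bpe",
)
sp = spm.SentencePieceProcessor(model_file="ko_bpe.model")
bpe_tokens = sp.encode(sentence, out_type=str)

print(alphabet_tokens)
print(morpheme_tokens)
print(bpe_tokens)
```

Pairing each Korean tokenizer with each English tokenizer in the same way yields the nine source-target configurations, whose outputs would then be scored with corpus-level BLEU (e.g. via sacrebleu).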

Related research

01/18/2022 - Syntax-based data augmentation for Hungarian-English machine translation
We train Transformer-based neural machine translation models for Hungari...

01/10/2017 - Bidirectional American Sign Language to English Translation
We outline a bidirectional translation system that converts sentences fr...

06/10/2019 - Improving Neural Language Modeling via Adversarial Training
Recently, substantial progress has been made in language modeling by usi...

06/27/2021 - Power Law Graph Transformer for Machine Translation and Representation Learning
We present the Power Law Graph Transformer, a transformer model with wel...

09/25/2018 - Fast and Simple Mixture of Softmaxes with BPE and Hybrid-LightRNN for Language Generation
Mixture of Softmaxes (MoS) has been shown to be effective at addressing ...

10/01/2019 - Putting Machine Translation in Context with the Noisy Channel Model
We show that Bayes' rule provides a compelling mechanism for controlling...

01/03/2020 - Learning Accurate Integer Transformer Machine-Translation Models
We describe a method for training accurate Transformer machine-translati...