Graph-based Filtering of Out-of-Vocabulary Words for Encoder-Decoder Models

05/28/2018
by Satoru Katsumata, et al.

Encoder-decoder models typically employ only the words that occur frequently in the training corpus, in order to reduce computational cost and exclude noise. However, this vocabulary set may still include words that interfere with learning in encoder-decoder models. This paper proposes a method for selecting words that are more suitable for learning encoders by using not only frequency but also co-occurrence information, which we capture with the HITS algorithm. We apply the proposed method to two tasks: machine translation and grammatical error correction. For Japanese-to-English translation, the method achieves a BLEU score 0.56 points higher than that of a baseline. It also outperforms the baseline on English grammatical error correction, with an F0.5-measure 1.48 points higher.
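The core idea of the abstract, scoring candidate vocabulary words by running HITS over a word co-occurrence graph, can be illustrated with a minimal Python sketch. The graph construction (sentence-level co-occurrence), the fixed number of iterations, and the top-k selection rule below are illustrative assumptions, not the paper's exact procedure; in particular, the combination of HITS scores with word frequency that the abstract mentions is omitted here.

from collections import defaultdict
from itertools import combinations

def build_cooccurrence_graph(sentences):
    """Build an undirected word co-occurrence graph: nodes are words and an
    edge links two words appearing in the same sentence (assumption: the
    paper's actual graph construction may differ)."""
    edges = defaultdict(set)
    for tokens in sentences:
        for u, v in combinations(set(tokens), 2):
            edges[u].add(v)
            edges[v].add(u)
    return edges

def hits_scores(edges, iterations=50):
    """Run the HITS power iteration and return authority scores per word.
    On an undirected graph the hub and authority scores coincide."""
    hubs = {w: 1.0 for w in edges}
    auths = {w: 1.0 for w in edges}
    for _ in range(iterations):
        # Authority update: sum of the hub scores of a word's neighbours.
        auths = {w: sum(hubs[n] for n in nbrs) for w, nbrs in edges.items()}
        norm = sum(a * a for a in auths.values()) ** 0.5 or 1.0
        auths = {w: a / norm for w, a in auths.items()}
        # Hub update: sum of the authority scores of a word's neighbours.
        hubs = {w: sum(auths[n] for n in nbrs) for w, nbrs in edges.items()}
        norm = sum(h * h for h in hubs.values()) ** 0.5 or 1.0
        hubs = {w: h / norm for w, h in hubs.items()}
    return auths

def select_vocabulary(sentences, vocab_size):
    """Rank words by HITS authority score and keep the top-k as the encoder
    vocabulary (hypothetical selection rule, for illustration only)."""
    graph = build_cooccurrence_graph(sentences)
    scores = hits_scores(graph)
    return sorted(scores, key=scores.get, reverse=True)[:vocab_size]

For example, select_vocabulary(tokenized_corpus, 30000) would return the 30,000 highest-scoring words from a list of tokenized sentences; a frequency-only baseline would instead keep the 30,000 most frequent words.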
