
Graph-based Filtering of Out-of-Vocabulary Words for Encoder-Decoder Models

by   Satoru Katsumata, et al.
Tokyo Metropolitan University

Encoder-decoder models typically employ only words that occur frequently in the training corpus, both to reduce computational cost and to exclude noise. However, this vocabulary set may still contain words that interfere with learning in encoder-decoder models. This paper proposes a method for selecting words that are more suitable for learning encoders by using not only frequency but also co-occurrence information, which we capture with the HITS algorithm. We apply the proposed method to two tasks: machine translation and grammatical error correction. For Japanese-to-English translation, the method achieves a BLEU score 0.56 points higher than that of a baseline. It also outperforms the baseline on English grammatical error correction, with an F0.5-measure 1.48 points higher.
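The abstract says co-occurrence information is captured with the HITS algorithm, which iteratively computes hub and authority scores over a directed graph. As a rough illustration of that scoring step (not the authors' implementation; the graph construction and function names here are assumptions), a word co-occurrence graph could be scored like this:

```python
from collections import defaultdict

def hits_scores(edges, iterations=50):
    """Power-iteration HITS over a directed graph.

    `edges` is a list of (source, target) pairs, e.g. directed
    co-occurrence links between words. Returns (hub, auth) dicts.
    """
    nodes = {n for edge in edges for n in edge}
    out_links = defaultdict(list)
    in_links = defaultdict(list)
    for s, t in edges:
        out_links[s].append(t)
        in_links[t].append(s)

    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # Authority: sum of hub scores of nodes linking in, then L2-normalize.
        auth = {n: sum(hub[s] for s in in_links[n]) for n in nodes}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {n: v / norm for n, v in auth.items()}
        # Hub: sum of authority scores of nodes linked to, then L2-normalize.
        hub = {n: sum(auth[t] for t in out_links[n]) for n in nodes}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {n: v / norm for n, v in hub.items()}
    return hub, auth

# Hypothetical usage: keep the top-k words by authority score as the vocabulary.
edges = [("a", "c"), ("b", "c"), ("d", "c"), ("a", "b")]
hub, auth = hits_scores(edges)
vocab = sorted(auth, key=auth.get, reverse=True)[:2]
```

In this toy graph, "c" receives links from three hubs, so it gets the highest authority score and is kept first. How the paper builds the graph from the corpus and thresholds the scores is specified in the full text, not here.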

