Bilingual Character Representation for Efficiently Addressing Out-of-Vocabulary Words in Code-Switching Named Entity Recognition

05/30/2018
by   Genta Indra Winata, et al.

We propose an LSTM-based model with a hierarchical architecture for named entity recognition on code-switching Twitter data. Our model uses a bilingual character representation and transfer learning to address out-of-vocabulary words. To mitigate data noise, we propose token replacement and normalization. In the 3rd Workshop on Computational Approaches to Linguistic Code-Switching Shared Task, we achieved second place with a 62.76 F1-score on the English-Spanish language pair, without using any gazetteers or knowledge-based information.

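The abstract only names the architecture, so the following is a minimal, hypothetical sketch (in PyTorch) of the general idea it describes: a character-level BiLSTM builds a shared character representation for each word, which is concatenated with a word embedding and fed to a word-level BiLSTM that scores NER tags. All class names, dimensions, and the tag count are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of a hierarchical (char + word) BiLSTM tagger.
    # Hyperparameters and names are assumptions for illustration only.
    import torch
    import torch.nn as nn

    class HierarchicalNERTagger(nn.Module):
        def __init__(self, char_vocab_size, word_vocab_size, num_tags,
                     char_emb_dim=32, char_hidden=50,
                     word_emb_dim=100, word_hidden=200):
            super().__init__()
            self.char_emb = nn.Embedding(char_vocab_size, char_emb_dim, padding_idx=0)
            # Character-level BiLSTM: one representation per word, shared across languages.
            self.char_lstm = nn.LSTM(char_emb_dim, char_hidden,
                                     batch_first=True, bidirectional=True)
            self.word_emb = nn.Embedding(word_vocab_size, word_emb_dim, padding_idx=0)
            # Word-level BiLSTM over the whole sentence.
            self.word_lstm = nn.LSTM(word_emb_dim + 2 * char_hidden, word_hidden,
                                     batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * word_hidden, num_tags)

        def forward(self, word_ids, char_ids):
            # word_ids: (batch, seq_len); char_ids: (batch, seq_len, max_word_len)
            b, s, c = char_ids.shape
            chars = self.char_emb(char_ids.view(b * s, c))       # (b*s, c, char_emb_dim)
            _, (h, _) = self.char_lstm(chars)                    # h: (2, b*s, char_hidden)
            char_repr = torch.cat([h[0], h[1]], dim=-1).view(b, s, -1)
            words = torch.cat([self.word_emb(word_ids), char_repr], dim=-1)
            enc, _ = self.word_lstm(words)                       # (b, s, 2*word_hidden)
            return self.out(enc)                                 # per-token tag scores

    # Toy usage with random indices (9 BIO-style tags assumed).
    model = HierarchicalNERTagger(char_vocab_size=100, word_vocab_size=5000, num_tags=9)
    scores = model(torch.randint(1, 5000, (2, 12)), torch.randint(1, 100, (2, 12, 8)))
    print(scores.shape)  # torch.Size([2, 12, 9])

Because the character vocabulary is shared between English and Spanish, the character-level encoder can produce representations for out-of-vocabulary words in either language; the transfer-learning and noise-handling steps mentioned in the abstract are not shown here.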