Robust Multilingual Part-of-Speech Tagging via Adversarial Training
Adversarial training (AT) is a powerful regularization method for neural networks, aiming to achieve robustness to input perturbations. Yet, the specific effects of the robustness obtained from AT are still unclear in the context of natural language processing. In this paper, we propose and analyze a neural POS tagging model that exploits AT. In our experiments on the Penn Treebank WSJ corpus and the Universal Dependencies (UD) dataset (28 languages), we find that AT not only improves the overall tagging accuracy, but also 1) largely prevents overfitting in low-resource languages and 2) boosts tagging accuracy for rare/unseen words. The proposed POS tagger achieves state-of-the-art performance on nearly all of the languages in UD v1.2. We also demonstrate that 3) the improved tagging performance from AT contributes to the downstream task of dependency parsing, and that 4) AT helps the model learn cleaner word and internal representations. These positive results motivate further use of AT for natural language tasks.
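For readers unfamiliar with AT in NLP, the sketch below illustrates the general recipe of applying adversarial training to input embeddings: compute the gradient of the loss with respect to the embeddings, add a worst-case perturbation of bounded norm in that gradient direction, and train on the perturbed input as well. This is a minimal illustration of the technique, not the paper's exact implementation; the helper names `model.embedding` and `model.forward_from_embeddings`, and the value of `epsilon`, are hypothetical.

```python
import torch
import torch.nn.functional as F

def adversarial_loss(model, word_ids, tags, epsilon=0.01):
    """Loss on adversarially perturbed input embeddings.

    Assumes a hypothetical tagger exposing `model.embedding` (the lookup
    layer) and `model.forward_from_embeddings` (the rest of the network,
    returning per-token tag logits).
    """
    # Look up embeddings and make them a leaf tensor so we can
    # differentiate the loss with respect to them.
    embeds = model.embedding(word_ids).detach().requires_grad_(True)

    # Clean forward pass to obtain the gradient direction.
    logits = model.forward_from_embeddings(embeds)
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), tags.view(-1))
    (grad,) = torch.autograd.grad(loss, embeds)

    # Fast-gradient perturbation: a step of L2 norm `epsilon` along the
    # gradient (normalized over the whole batch here, for simplicity).
    perturb = epsilon * grad / (grad.norm() + 1e-12)

    # Forward pass on the perturbed embeddings; the perturbation is
    # treated as a constant, so gradients flow only through the model.
    adv_logits = model.forward_from_embeddings(embeds + perturb.detach())
    return F.cross_entropy(adv_logits.view(-1, adv_logits.size(-1)),
                           tags.view(-1))
```

In a typical training loop, this adversarial loss is added to the ordinary cross-entropy loss on the clean input, so the model is regularized toward predictions that are stable under small embedding-space perturbations.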