Self-Training for Unsupervised Parsing with PRPN

05/27/2020
by Anhad Mohananey, et al.

Neural unsupervised parsing (UP) models learn to parse without access to syntactic annotations, while being optimized for another task like language modeling. In this work, we propose self-training for neural UP models: we leverage aggregated annotations predicted by copies of our model as supervision for future copies. To be able to use our model's predictions during training, we extend a recent neural UP architecture, the PRPN (Shen et al., 2018a), such that it can be trained in a semi-supervised fashion. We then add examples with parses predicted by our model to our unlabeled UP training data. Our self-trained model outperforms the PRPN by 8.1% F1 and the previous state of the art by 1.6% F1. In addition, we show that our architecture can also be helpful for semi-supervised parsing in ultra-low-resource settings.
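The self-training recipe described above can be sketched in a few lines: copies of the model predict parses for unlabeled sentences, the predictions are aggregated (here by majority vote, one plausible aggregation scheme), and the resulting pseudo-labels are added to the supervised training pool for the next round. This is a hypothetical illustration, not the authors' code; `train_fn` and `predict_fn` stand in for the actual PRPN training and inference procedures.

```python
from collections import Counter

def aggregate_predictions(predictions):
    """Majority-vote aggregation over parses predicted by model copies.

    `predictions` maps each sentence to the list of parses predicted
    for it (one per model copy); the most frequent parse wins.
    """
    pseudo_labels = {}
    for sentence, parses in predictions.items():
        parse, _count = Counter(parses).most_common(1)[0]
        pseudo_labels[sentence] = parse
    return pseudo_labels

def self_train(train_fn, predict_fn, unlabeled, labeled, rounds=3):
    """Generic self-training loop (assumed interface, for illustration).

    train_fn(labeled, unlabeled) -> list of trained model copies
    predict_fn(model, sentence)  -> predicted parse for the sentence
    Returns the final pool of (pseudo-)labeled examples.
    """
    for _ in range(rounds):
        models = train_fn(labeled, unlabeled)
        # Each model copy annotates every unlabeled sentence.
        predictions = {s: [predict_fn(m, s) for m in models]
                       for s in unlabeled}
        # Aggregated predictions become supervision for the next round.
        labeled = {**labeled, **aggregate_predictions(predictions)}
    return labeled
```

In the paper's setting the supervised signal from these pseudo-labels is combined with the PRPN's language-modeling objective, which is what the semi-supervised extension enables.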


Related research

Semantic Parsing with Semi-Supervised Sequential Autoencoders (09/29/2016)
We present a novel semi-supervised approach for sequence transduction an...

On the Role of Supervision in Unsupervised Constituency Parsing (10/06/2020)
We analyze several recent unsupervised constituency parsing models, whic...

Deep Contextualized Self-training for Low Resource Dependency Parsing (11/11/2019)
Neural dependency parsing has proven very effective, achieving state-of-...

Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference (01/21/2020)
Some NLP tasks can be solved in a fully unsupervised fashion by providin...

Towards Semi-Supervised Learning for Deep Semantic Role Labeling (08/28/2018)
Neural models have shown several state-of-the-art performances on Semant...

Low-Resource Domain Adaptation for Compositional Task-Oriented Semantic Parsing (10/07/2020)
Task-oriented semantic parsing is a critical component of virtual assist...