Self-Training for Unsupervised Parsing with PRPN

by Anhad Mohananey et al.

Neural unsupervised parsing (UP) models learn to parse without access to syntactic annotations, while being optimized for another task like language modeling. In this work, we propose self-training for neural UP models: we leverage aggregated annotations predicted by copies of our model as supervision for future copies. To be able to use our model's predictions during training, we extend a recent neural UP architecture, the PRPN (Shen et al., 2018a), such that it can be trained in a semi-supervised fashion. We then add examples with parses predicted by our model to our unlabeled UP training data. Our self-trained model outperforms the PRPN by 8.1% F1 and the previous state of the art by 1.6% F1. In addition, we show that our architecture can also be helpful for semi-supervised parsing in ultra-low-resource settings.
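The self-training loop described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the PRPN training and parse prediction are replaced by hypothetical toy stand-ins (`predict_parse`, `aggregate`), and only the outer loop of predicting with several model copies, aggregating their outputs, and producing pseudo-labeled data for the next round is shown.

```python
# Minimal sketch of a self-training round, assuming toy stand-ins for
# the actual PRPN model copies (which the paper trains separately).
from collections import Counter

def predict_parse(model_seed, sentence):
    # Hypothetical stand-in for one trained UP model copy's prediction.
    # Real copies would differ by random initialization; here the seed
    # just perturbs a trivial branching decision for illustration.
    tokens = sentence.split()
    if len(tokens) < 3 or model_seed % 2 == 0:
        return "right-branching"
    return "left-branching"

def aggregate(parses):
    # Aggregate annotations predicted by several model copies
    # via a simple majority vote.
    return Counter(parses).most_common(1)[0][0]

def self_train_round(unlabeled, n_copies=3):
    # 1) predict a parse for each sentence with every model copy,
    # 2) aggregate the predictions,
    # 3) return pseudo-labeled examples to add to the training data.
    pseudo_labeled = []
    for sent in unlabeled:
        votes = [predict_parse(seed, sent) for seed in range(n_copies)]
        pseudo_labeled.append((sent, aggregate(votes)))
    return pseudo_labeled
```

In the paper's setup, the aggregated pseudo-parses would feed the supervised branch of the semi-supervised PRPN, while the remaining data continues to be used in the unsupervised (language-modeling) objective.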



