Is Supervised Syntactic Parsing Beneficial for Language Understanding? An Empirical Investigation

08/15/2020
by Goran Glavaš et al.

Traditional NLP has long held that (supervised) syntactic parsing is necessary for successful higher-level language understanding. The recent advent of end-to-end neural language learning, self-supervised via language modeling (LM), and its success on a wide range of language understanding tasks, however, calls this belief into question. In this work, we empirically investigate the usefulness of supervised parsing for semantic language understanding in the context of LM-pretrained transformer networks. Relying on the established fine-tuning paradigm, we first couple a pretrained transformer with a biaffine parsing head, aiming to infuse explicit syntactic knowledge from Universal Dependencies (UD) treebanks into the transformer. We then fine-tune the model for language understanding (LU) tasks and measure the effect of the intermediate parsing training (IPT) on downstream LU performance. Results from both monolingual English and zero-shot language transfer experiments (with intermediate target-language parsing) show that explicit formalized syntax, injected into transformers through intermediate supervised parsing, has a very limited and inconsistent effect on downstream LU performance. Our results, coupled with our analysis of transformers' representation spaces before and after intermediate parsing, make a significant step towards answering an essential question: how (un)availing is supervised parsing for high-level semantic language understanding in the era of large neural models?
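The intermediate parsing training (IPT) setup described above places a biaffine parsing head, in the spirit of deep biaffine attention parsers, on top of the LM-pretrained transformer and trains it on UD treebanks before downstream LU fine-tuning. The snippet below is only a minimal sketch of what such a head can look like; it is not the authors' implementation, and the framework (PyTorch), hidden size, projection dimensions, and label count are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a biaffine dependency parsing head
# of the kind attached to an LM-pretrained transformer for intermediate
# parsing training. Dimensions and framework choices are assumptions.
import torch
import torch.nn as nn


class BiaffineParsingHead(nn.Module):
    """Scores head-dependent arcs and their UD relation labels."""

    def __init__(self, hidden_size: int = 768, arc_dim: int = 512,
                 label_dim: int = 128, num_labels: int = 37):
        super().__init__()
        # Separate projections for tokens acting as heads vs. dependents.
        self.arc_head = nn.Linear(hidden_size, arc_dim)
        self.arc_dep = nn.Linear(hidden_size, arc_dim)
        self.label_head = nn.Linear(hidden_size, label_dim)
        self.label_dep = nn.Linear(hidden_size, label_dim)
        # Biaffine parameters (bias terms folded in via the +1 dimensions).
        self.W_arc = nn.Parameter(torch.empty(arc_dim + 1, arc_dim))
        self.W_label = nn.Parameter(
            torch.empty(num_labels, label_dim + 1, label_dim + 1))
        nn.init.xavier_uniform_(self.W_arc)
        nn.init.xavier_uniform_(self.W_label)

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden_size) from the transformer.
        ones = hidden_states.new_ones(hidden_states.shape[:2] + (1,))

        h_arc_head = self.arc_head(hidden_states)                 # (B, T, d_a)
        h_arc_dep = torch.cat([self.arc_dep(hidden_states), ones], dim=-1)
        # arc_scores[b, i, j] = score of token j being the head of token i.
        arc_scores = h_arc_dep @ self.W_arc @ h_arc_head.transpose(1, 2)

        h_lab_head = torch.cat([self.label_head(hidden_states), ones], dim=-1)
        h_lab_dep = torch.cat([self.label_dep(hidden_states), ones], dim=-1)
        # label_scores: (B, num_labels, T_dependents, T_heads)
        label_scores = torch.einsum(
            "bid,ndh,bjh->bnij", h_lab_dep, self.W_label, h_lab_head)
        return arc_scores, label_scores


# Usage sketch: the head consumes contextual token representations and is
# trained jointly with the encoder on UD treebanks (cross-entropy over gold
# heads and relation labels) before fine-tuning on the downstream LU task.
encoder_out = torch.randn(2, 12, 768)        # stand-in for transformer output
arc_scores, label_scores = BiaffineParsingHead()(encoder_out)
gold_heads = torch.randint(0, 12, (2, 12))   # gold head index per token
arc_loss = nn.functional.cross_entropy(
    arc_scores.reshape(-1, 12), gold_heads.reshape(-1))
```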


