A Systematic Assessment of Syntactic Generalization in Neural Language Models

05/07/2020
by Jennifer Hu, et al.

State-of-the-art neural network models have achieved remarkably low perplexity scores on major language modeling benchmarks, but it remains unknown whether optimizing for broad-coverage predictive performance also yields human-like syntactic knowledge. Moreover, existing work has not provided a clear picture of the model properties required to produce proper syntactic generalizations. We present a systematic evaluation of the syntactic knowledge of neural language models, testing 20 combinations of model type and training-data size on a set of 34 syntactic test suites. We find that model architecture clearly influences syntactic generalization performance: Transformer models and models with explicit hierarchical structure reliably outperform pure sequence models in their predictions. In contrast, we find no clear influence of training-data scale on these syntactic generalization tests, and no clear relationship between a model's perplexity and its syntactic generalization performance.
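To make the evaluation protocol concrete, below is a minimal sketch of a surprisal-based syntactic test in this style: a model is credited with a correct generalization when it assigns lower surprisal (higher probability) to a grammatical sentence than to a minimally different ungrammatical one. The sketch assumes the Hugging Face transformers library and an off-the-shelf GPT-2 model as stand-ins, and compares whole-sentence surprisal for simplicity; the paper's actual test suites measure surprisal at specific critical regions, so this is an illustration, not the authors' pipeline.

# A minimal sketch of a surprisal-based syntactic test item, assuming
# Hugging Face `transformers` and an off-the-shelf GPT-2 model
# (stand-ins; not the authors' models or evaluation code).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def total_surprisal(sentence: str) -> float:
    """Summed per-token surprisal (negative log probability, in nats)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids the model returns the mean cross-entropy over
        # the ids.size(1) - 1 predicted tokens; multiply back to a sum.
        loss = model(ids, labels=ids).loss
    return loss.item() * (ids.size(1) - 1)

# One subject-verb agreement item: the model passes if the grammatical
# sentence receives lower total surprisal than its ungrammatical twin.
s_good = total_surprisal("The keys to the cabinet are on the table.")
s_bad = total_surprisal("The keys to the cabinet is on the table.")
print("pass" if s_good < s_bad else "fail", f"({s_good:.2f} vs {s_bad:.2f} nats)")

Aggregating pass rates over many such controlled items, grouped into test suites by construction, yields the syntactic generalization scores that the paper contrasts with perplexity.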

Related Research

05/10/2021
Assessing the Syntactic Capabilities of Transformer-based Multilingual Language Models
Multilingual Transformer-based language models, usually pretrained on mo...

05/12/2020
Exploiting Syntactic Structure for Better Language Modeling: A Syntactic Distance Approach
It is commonly believed that knowledge of syntactic structure should imp...

04/10/2020
Overestimation of Syntactic Representation in Neural Language Models
With the advent of powerful neural language models over the last few yea...

07/30/2021
Structural Guidance for Transformer Language Models
Transformer-based language models pre-trained on large amounts of text d...

09/24/2021
Transformers Generalize Linearly
Natural language exhibits patterns of hierarchically governed dependenci...

06/02/2020
On the Predictive Power of Neural Language Models for Human Real-Time Comprehension Behavior
Human reading behavior is tuned to the statistics of natural language: t...

05/31/2021
Effective Batching for Recurrent Neural Network Grammars
As a language model that integrates traditional symbolic operations and ...