A Systematic Assessment of Syntactic Generalization in Neural Language Models

by Jennifer Hu, et al.

State-of-the-art neural network models have achieved dizzyingly low perplexity scores on major language modeling benchmarks, but it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge. Furthermore, existing work has not provided a clear picture of the model properties required to produce proper syntactic generalizations. We present a systematic evaluation of the syntactic knowledge of neural language models, testing 20 combinations of model types and data sizes on a set of 34 syntactic test suites. We find that model architecture clearly influences syntactic generalization performance: Transformer models and models with explicit hierarchical structure reliably outperform pure sequence models in their predictions. In contrast, we find no clear influence of the scale of training data on these syntactic generalization tests, and no clear relation between a model's perplexity and its syntactic generalization performance.
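The evaluation paradigm behind test suites like these scores a language model on minimal pairs: the model passes an item when it assigns lower surprisal (higher probability) to the grammatical variant than to its ungrammatical counterpart. The sketch below illustrates that logic only; it substitutes a toy add-one-smoothed bigram model for the paper's neural LMs, and the corpus, example pairs, and function names (`surprisal`, `suite_accuracy`) are all illustrative assumptions, not the paper's code.

```python
import math
from collections import Counter

# Hypothetical toy corpus of grammatical sentences; a stand-in for the
# large training sets used with the paper's LSTM/Transformer/RNNG models.
corpus = [
    "the dog barks", "the dogs bark",
    "a cat sleeps", "the cats sleep",
]

bigrams = Counter()
contexts = Counter()
for sent in corpus:
    toks = ["<s>"] + sent.split()
    for a, b in zip(toks, toks[1:]):
        bigrams[(a, b)] += 1
        contexts[a] += 1
vocab = {w for s in corpus for w in s.split()} | {"<s>"}

def surprisal(sentence):
    """Total surprisal (bits) of a sentence under an add-one-smoothed bigram LM."""
    toks = ["<s>"] + sentence.split()
    total = 0.0
    for a, b in zip(toks, toks[1:]):
        p = (bigrams[(a, b)] + 1) / (contexts[a] + len(vocab))
        total += -math.log2(p)
    return total

def suite_accuracy(minimal_pairs):
    """Fraction of pairs where the grammatical variant gets lower surprisal."""
    hits = sum(surprisal(good) < surprisal(bad) for good, bad in minimal_pairs)
    return hits / len(minimal_pairs)

# Illustrative subject-verb agreement items, in (grammatical, ungrammatical) order.
pairs = [
    ("the dogs bark", "the dogs barks"),
    ("the dog barks", "the dog bark"),
]
print(suite_accuracy(pairs))  # → 1.0 on this toy setup
```

A key design point the paper's suites share with this sketch: the two sentences in a pair differ only in the critical region, so the surprisal comparison isolates the syntactic contrast rather than overall sentence likelihood.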




