Probing What Different NLP Tasks Teach Machines about Function Word Comprehension

by Najoung Kim, et al.
Brown University
Johns Hopkins University

We introduce a set of nine challenge tasks that test for understanding of function words. These tasks are created by structurally mutating sentences from existing datasets to target the comprehension of specific types of function words (e.g., prepositions, wh-words). Using these probing tasks, we explore the effects of various pretraining objectives for sentence encoders (e.g., language modeling, CCG supertagging, and natural language inference (NLI)) on the learned representations. Our results show that pretraining on CCG, our most syntactic objective, performs best on average across our probing tasks, suggesting that syntactic knowledge helps function word comprehension. Language modeling also shows strong performance, supporting its widespread use for pretraining state-of-the-art NLP models. Overall, no pretraining objective dominates across the board, and our function word probing tasks highlight several intuitive differences between pretraining objectives, e.g., that NLI helps the comprehension of negation.
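The probing setup described above can be sketched as follows: a sentence encoder is frozen, a lightweight classifier (the probe) is trained on top of its representations, and probe accuracy is read as evidence of what the representations capture. The toy data, the bag-of-words "encoder", and the negation-detection task below are illustrative stand-ins, not the paper's actual challenge tasks, datasets, or pretrained encoders.

```python
import numpy as np

# Illustrative probing sketch (toy data and encoder; NOT the paper's actual
# setup). The probe tests whether frozen sentence representations encode
# negation, echoing the abstract's observation that NLI pretraining helps
# negation comprehension.
sentences = [
    ("the cat is on the mat", 0),
    ("the cat is not on the mat", 1),
    ("he eats meat", 0),
    ("he never eats meat", 1),
    ("she called", 0),
    ("she did not call", 1),
]

vocab = sorted({w for s, _ in sentences for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}

def encode(sent):
    """Frozen bag-of-words stand-in for a pretrained sentence encoder."""
    vec = np.zeros(len(vocab))
    for w in sent.split():
        vec[idx[w]] += 1.0
    return vec

X = np.array([encode(s) for s, _ in sentences])
y = np.array([label for _, label in sentences], dtype=float)

# Logistic-regression probe trained by gradient descent on the frozen features;
# the encoder itself is never updated.
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(float)
train_acc = float(np.mean(preds == y))
print(f"probe training accuracy: {train_acc:.2f}")
```

In the paper's setting, the encoder would instead be pretrained on an objective such as language modeling or CCG supertagging and then frozen, and each of the nine challenge tasks would supply its own mutated-sentence examples.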




